CS236781: Deep Learning on Computational Accelerators¶

Homework Assignment 2¶

Faculty of Computer Science, Technion.

Submitted by:

# Name Id email
Student 1 [your name here] [your id here] [your email here]
Student 2 [your name here] [your id here] [your email here]

Introduction¶

In this assignment we'll create a from-scratch implementation of two fundamental deep learning concepts: the backpropagation algorithm and stochastic gradient descent-based optimizers. In addition, you will create a general-purpose multilayer perceptron, the core building block of deep neural networks. We'll visualize decision boundaries and ROC curves in the context of binary classification. Following that, we will focus on convolutional networks with residual blocks. We'll create our own network architectures and train them using GPUs on the course servers, and then conduct architecture experiments to determine the effects of different architectural decisions on the performance of deep networks.

General Guidelines¶

  • Please read the getting started page on the course website. It explains how to set up, run, and submit the assignment.
  • Please read the course servers usage guide. It explains how to use and run your code on the course servers to benefit from training with GPUs.
  • The text and code cells in these notebooks are intended to guide you through the assignment and help you verify your solutions. The notebooks do not need to be edited at all (unless you wish to play around). The only exception is to fill your name(s) in the above cell before submission. Please do not remove sections or change the order of any cells.
  • All your code (and even answers to questions) should be written in the files within the python package corresponding to the assignment number (hw1, hw2, etc). You can of course use any editor or IDE to work on these files.

Contents¶

  • Part 1: Backpropagation
  • Part 2: Optimization and Training
  • Part 3: Binary Classification with Multilayer Perceptrons
  • Part 4: Convolutional Neural Networks
  • Part 5: Convolutional Architecture Experiments
  • Part 6: YOLO - Object Detection
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 1: Backpropagation¶

In this part, we'll implement backpropagation and automatic differentiation from scratch and compare our implementations to PyTorch's built in implementation (autograd).

In [1]:
import torch
import unittest

%load_ext autoreload
%autoreload 2

test = unittest.TestCase()

Reminder: The backpropagation algorithm is at the core of training deep models. To state the problem we'll tackle in this notebook, imagine we have an L-layer MLP model, defined as

$$ \hat{\vec{y}}^i = \vec{y}_L^i = \varphi_L \left( \mat{W}_L \varphi_{L-1} \left( \cdots \varphi_1 \left( \mat{W}_1 \vec{x}^i + \vec{b}_1 \right) \cdots \right) + \vec{b}_L \right), $$

a pointwise loss function $\ell(\vec{y}, \hat{\vec{y}})$ and an empirical loss over our entire data set,

$$ L(\vec{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \ell(\vec{y}^i, \hat{\vec{y}}^i) + R(\vec{\theta}) $$

where $\vec{\theta}$ is a vector containing all network parameters, e.g. $\vec{\theta} = \left[ \mat{W}_{1,:}, \vec{b}_1, \dots, \mat{W}_{L,:}, \vec{b}_L \right]$.

In order to train our model we would like to calculate the derivative (or gradient, in the multivariate case) of the loss with respect to each and every one of the parameters, i.e. $\pderiv{L}{\mat{W}_j}$ and $\pderiv{L}{\vec{b}_j}$ for all $j$. Since the gradient "points" in the direction of functional increase, the negative gradient is often used as a descent direction for descent-based optimization algorithms. In other words, iteratively updating each parameter proportionally to its negative gradient can lead to convergence to a local minimum of the loss function.

Calculus tells us that as long as we know the derivatives of all the functions "along the way" ($\varphi_i(\cdot),\ \ell(\cdot,\cdot),\ R(\cdot)$) we can use the chain rule to calculate the derivative of the loss with respect to any one of the parameter vectors. Note that if the loss $L(\vec{\theta})$ is scalar (which is usually the case), the gradient of a parameter will have the same shape as the parameter itself (matrix/vector/tensor of same dimensions).

For deep models that are a composition of many functions, calculating the gradient of each parameter by hand and implementing hard-coded gradient derivations quickly becomes infeasible. Additionally, such code makes models hard to change, since any change potentially requires re-derivation and re-implementation of the entire gradient function.

The backpropagation algorithm, which we saw in the lecture, provides us with an effective method of applying the chain rule recursively so that we can implement gradient calculations of arbitrarily deep or complex models.

We'll now implement backpropagation using a modular approach, which will allow us to chain many component layers together and get automatic gradient calculation of the output with respect to the input or any intermediate parameter.

To do this, we'll define a Layer class. Here's the API of this class:

In [2]:
import hw2.layers as layers
help(layers.Layer)
Help on class Layer in module hw2.layers:

class Layer(abc.ABC)
 |  A Layer is some computation element in a network architecture which
 |  supports automatic differentiation using forward and backward functions.
 |  
 |  Method resolution order:
 |      Layer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __call__(self, *args, **kwargs)
 |      Call self as a function.
 |  
 |  __init__(self)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  __repr__(self)
 |      Return repr(self).
 |  
 |  backward(self, dout)
 |      Computes the backward pass of the layer, i.e. the gradient
 |      calculation of the final network output with respect to each of the
 |      parameters of the forward function.
 |      :param dout: The gradient of the network with respect to the
 |      output of this layer.
 |      :return: A tuple with the same number of elements as the parameters of
 |      the forward function. Each element will be the gradient of the
 |      network output with respect to that parameter.
 |  
 |  forward(self, *args, **kwargs)
 |      Computes the forward pass of the layer.
 |      :param args: The computation arguments (implementation specific).
 |      :return: The result of the computation.
 |  
 |  params(self)
 |      :return: Layer's trainable parameters and their gradients as a list
 |      of tuples, each tuple containing a tensor and it's corresponding
 |      gradient tensor.
 |  
 |  train(self, training_mode=True)
 |      Changes the mode of this layer between training and evaluation (test)
 |      mode. Some layers have different behaviour depending on mode.
 |      :param training_mode: True: set the model in training mode. False: set
 |      evaluation mode.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'backward', 'forward', 'params'})

In other words, a Layer can represent any computation: a network layer, an activation function, a loss function, or generally any computation that we know how to derive a gradient for.

Each Layer must define a forward() function and a backward() function.

  • The forward() function performs the actual calculation/operation of the block and returns an output.
  • The backward() function computes the gradient of the input and parameters as a function of the gradient of the output, according to the chain rule.

Here's a diagram illustrating the above explanation:

Note that the diagram doesn't show that if the function is parametrized, i.e. $f(\vec{x},\vec{y})=f(\vec{x},\vec{y};\vec{w})$, there are also gradients to calculate for the parameters $\vec{w}$.

The forward pass is straightforward: just do the computation. To understand the backward pass, imagine that there's some "downstream" loss function $L(\vec{\theta})$ and magically somehow we are told the gradient of that loss with respect to the output $\vec{z}$ of our block, i.e. $\pderiv{L}{\vec{z}}$.

Now, since we know how to calculate the derivative of $f(\vec{x},\vec{y};\vec{w})$, it means we know how to calculate $\pderiv{\vec{z}}{\vec{x}}$, $\pderiv{\vec{z}}{\vec{y}}$ and $\pderiv{\vec{z}}{\vec{w}}$ . Thanks to the chain rule, this is all we need to calculate the gradients of the loss w.r.t. the input and parameters:

$$ \begin{align} \pderiv{L}{\vec{x}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{x}}\\ \pderiv{L}{\vec{y}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{y}}\\ \pderiv{L}{\vec{w}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}} \end{align} $$
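To make this concrete, here's a small sanity check of these chain-rule products on a toy parametrized block (plain PyTorch autograd, not the hw2 API):

In [ ]:
# Toy block: z = x * w (element-wise). Given dout = dL/dz from downstream,
# the chain rule gives dL/dx = dout * w and dL/dw = dout * x.
x = torch.randn(4, requires_grad=True)
w = torch.randn(4, requires_grad=True)
z = x * w
dout = torch.randn(4)                     # pretend gradient of a downstream loss
z.backward(dout)                          # autograd computes the same products
print(torch.allclose(x.grad, dout * w))   # True
print(torch.allclose(w.grad, dout * x))   # True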

Comparison with PyTorch¶

PyTorch has the nn.Module base class, which may seem similar to our Layer since it also represents a computation element in a network. However, PyTorch's nn.Modules don't compute the gradient directly; they only define the forward calculations. Instead, PyTorch has a more low-level API for defining a function and explicitly implementing its forward() and backward(). See autograd.Function. When an operation is performed on a tensor, it creates a Function instance which performs the operation and stores any information necessary for calculating the gradient later on. Additionally, each Function points to the other Function objects representing the operations performed earlier on the tensor. Thus, a graph (or DAG) of operations is created (this is not 100% exact, as the graph is actually composed of a different type of class which wraps the backward method, but it's accurate enough for our purposes).

A Tensor instance which was created by performing operations on one or more tensors with requires_grad=True has a grad_fn property, which is a Function instance representing the last operation performed to produce this tensor. This exposes the graph of Function instances, each with its own backward() function. Therefore, in PyTorch the backward() function is called on the tensors, not the modules.
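For example (a tiny demonstration of the mechanism described above):

In [ ]:
x = torch.randn(3, requires_grad=True)
z = (x * 2).sum()
print(z.grad_fn)    # <SumBackward0 object at ...>: the last op that produced z
z.backward()        # called on the tensor, not on a module
print(x.grad)       # tensor([2., 2., 2.])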

Our Layers are therefore a combination of the ideas in Module and Function and we'll implement them together, just to make things simpler. Our goal here is to create a "poor man's autograd": We'll use PyTorch tensors, but we'll calculate and store the gradients in our Layers (or return them). The gradients we'll calculate are of the entire block, not individual operations on tensors.

To test our implementation, we'll use PyTorch's autograd.

Note that of course this method of tracking gradients is much more limited than what PyTorch offers. However it allows us to implement the backpropagation algorithm very simply and really see how it works.

Let's set up some testing instrumentation:

In [3]:
from hw2.grad_compare import compare_layer_to_torch

def test_block_grad(block: layers.Layer, x, y=None, delta=1e-3):
    diffs = compare_layer_to_torch(block, x, y)
    
    # Assert diff values
    for diff in diffs:
        test.assertLess(diff, delta)

# Show the compare function
compare_layer_to_torch??

Notes:

  • After you complete your implementation, you should make sure to read and understand the compare_layer_to_torch() function. It will help you understand what PyTorch is doing.
  • The value of delta above should not actually be needed: a correct implementation will give you a diff of exactly zero.

Layer Implementations¶

We'll now implement some Layers that will enable us to later build an MLP model of arbitrary depth, complete with automatic differentiation.

For each block, you'll first implement the forward() function. Then, you will calculate the derivative of the block by hand with respect to each of its input tensors and each of its parameter tensors (if any). Using your manually-calculated derivation, you can then implement the backward() function.

Notice that we have intermediate Jacobians that are potentially high dimensional tensors. For example in the expression $\pderiv{L}{\vec{w}} = \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$, the term $\pderiv{\vec{z}}{\vec{w}}$ is a 4D Jacobian if both $\vec{z}$ and $\vec{w}$ are 2D matrices.

In order to implement the backpropagation algorithm efficiently, we need to implement every backward function without explicitly constructing this Jacobian. Instead, we're interested in directly calculating the vector-Jacobian product (VJP) $\pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$. To do this, try to figure out the gradient of the loss with respect to a single element, e.g. $\pderiv{L}{\vec{w}_{1,1}}$, and extrapolate from there how to obtain the VJP directly.
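Here's an illustration of the size difference (for a simple element-wise function, not one of the assignment's layers): materializing the full Jacobian and contracting it with $\pderiv{L}{\vec{z}}$ gives the same answer as the direct VJP, at a much higher cost.

In [ ]:
from torch.autograd.functional import jacobian

# For z = x**2 with x of shape (3, 5), the full Jacobian dz/dx has shape
# (3, 5, 3, 5), but only its "diagonal" entries are non-zero. The direct VJP
# is just an element-wise product: dout * 2x.
x = torch.randn(3, 5)
J = jacobian(lambda t: t ** 2, x)               # shape (3, 5, 3, 5)
dout = torch.randn(3, 5)
vjp_full = torch.einsum('ij,ijkl->kl', dout, J)
print(torch.allclose(vjp_full, dout * 2 * x))   # True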

Activation functions¶

(Leaky) ReLU¶

ReLU, or rectified linear unit, is a very common activation function in deep learning architectures. In its most standard form, as we'll implement here, it has no parameters.

We'll first implement the "leaky" version, defined as

$$ \mathrm{lrelu}(\vec{x}) = \max(\alpha\vec{x},\vec{x}), \ 0\leq\alpha<1 $$

This is similar to the ReLU activation we've seen in class, except that it has a small non-zero slope when its input is negative. Note that it's not strictly differentiable at zero; however, it has sub-gradients, defined separately for positive-valued and for negative-valued inputs.

TODO: Complete the implementation of the LeakyReLU class in the hw2/layers.py module.

In [4]:
N = 100
in_features = 200
num_classes = 10
eps = 1e-6
In [5]:
# Test LeakyReLU
alpha = 0.1
lrelu = layers.LeakyReLU(alpha=alpha)
x_test = torch.randn(N, in_features)

# Test forward pass
z = lrelu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.nn.LeakyReLU(alpha)(x_test), atol=eps))

# Test backward pass
test_block_grad(lrelu, x_test)
Comparing gradients... 
input    diff=0.000

Now using the LeakyReLU, we can trivially define a regular ReLU block as a special case.

TODO: Complete the implementation of the ReLU class in the hw2/layers.py module.

In [6]:
# Test ReLU
relu = layers.ReLU()
x_test = torch.randn(N, in_features)

# Test forward pass
z = relu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.relu(x_test), atol=eps))

# Test backward pass
test_block_grad(relu, x_test)
Comparing gradients... 
input    diff=0.000

Sigmoid¶

The sigmoid function $\sigma(x)$ is also sometimes used as an activation function. We have also seen it previously in the context of logistic regression.

The sigmoid function is defined as

$$ \sigma(\vec{x}) = \frac{1}{1+\exp(-\vec{x})}. $$
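For the backward pass, it may help to recall the standard identity (useful since it lets you reuse the cached forward output):

$$ \sigma'(\vec{x}) = \sigma(\vec{x})\left(1-\sigma(\vec{x})\right). $$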
In [7]:
# Test Sigmoid
sigmoid = layers.Sigmoid()
x_test = torch.randn(N, in_features, in_features) # 3D input should work

# Test forward pass
z = sigmoid(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.sigmoid(x_test), atol=eps))

# Test backward pass
test_block_grad(sigmoid, x_test)
Comparing gradients... 
input    diff=0.000

Hyperbolic Tangent¶

The hyperbolic tangent function $\tanh(x)$ is a common activation function used when the output should be in the range [-1, 1].

The tanh function is defined as

$$ \tanh(\vec{x}) = \frac{\exp(\vec{x})-\exp(-\vec{x})}{\exp(\vec{x})+\exp(-\vec{x})}. $$
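As with the sigmoid, the derivative (standard calculus, stated here since it's handy for the backward pass) has a convenient closed form in terms of the forward output:

$$ \frac{d}{dx}\tanh(x) = 1 - \tanh^2(x). $$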
In [8]:
# Test TanH
tanh = layers.TanH()
x_test = torch.randn(N, in_features, in_features) # 3D input should work

# Test forward pass
z = tanh(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.tanh(x_test), atol=eps))

# Test backward pass
test_block_grad(tanh, x_test)
Comparing gradients... 
input    diff=0.000

Linear (fully connected) layer¶

First, we'll implement an affine transform layer, also known as a fully connected layer.

Given an input $\mat{X}$ the layer computes,

$$ \mat{Z} = \mat{X} \mattr{W} + \vec{b} ,~ \mat{X}\in\set{R}^{N\times D_{\mathrm{in}}},~ \mat{W}\in\set{R}^{D_{\mathrm{out}}\times D_{\mathrm{in}}},~ \vec{b}\in\set{R}^{D_{\mathrm{out}}}. $$

Notes:

  • We write it this way, with the weight matrix transposed and shaped $D_{\mathrm{out}}\times D_{\mathrm{in}}$, to follow PyTorch's nn.Linear convention. A quick shape check appears below.
  • $N$ is the number of samples in the input (batch size). The input $\mat{X}$ will always be a tensor containing a batch dimension first.
  • Thanks to broadcasting, $\vec{b}$ can remain a vector even though the input $\mat{X}$ is a matrix.
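Here's a quick shape/broadcasting sanity check of the expression above, with small illustrative dimensions (plain tensors, not the Linear class):

In [ ]:
X = torch.randn(4, 3)   # N=4 samples, D_in=3
W = torch.randn(2, 3)   # D_out=2, D_in=3
b = torch.randn(2)      # broadcasts over the batch dimension
Z = X @ W.T + b
print(Z.shape)          # torch.Size([4, 2])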

TODO: Complete the implementation of the Linear class in the hw2/layers.py module.

In [9]:
a = torch.zeros(20)
print(a.grad)
None
In [10]:
# Test Linear
out_features = 1000
fc = layers.Linear(in_features, out_features)
x_test = torch.randn(N, in_features)

# Test forward pass
z = fc(x_test)
test.assertSequenceEqual(z.shape, [N, out_features])
torch_fc = torch.nn.Linear(in_features, out_features, bias=True)
torch_fc.weight = torch.nn.Parameter(fc.w)
torch_fc.bias = torch.nn.Parameter(fc.b)
test.assertTrue(torch.allclose(torch_fc(x_test), z, atol=eps))

# Test backward pass
test_block_grad(fc, x_test)

# Test second backward pass
x_test = torch.randn(N, in_features)
z = fc(x_test)
z = fc(x_test)
test_block_grad(fc, x_test)
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000

Cross-Entropy Loss¶

As you know by now, cross-entropy is a common loss function for classification tasks. In class, we defined it as

$$\ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) = - {\vectr{y}} \log(\hat{\vec{y}})$$

where $\hat{\vec{y}} = \mathrm{softmax}(\vec{x})$ is a probability vector (the output of softmax on the class scores $\vec{x}$) and the vector $\vec{y}$ is a 1-hot encoded class label.

However, it's tricky to compute the gradient of softmax, so instead we'll define a version of cross-entropy that produces the exact same output but works directly on the class scores $\vec{x}$.

We can write, $$\begin{align} \ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) &= - {\vectr{y}} \log(\hat{\vec{y}}) = - {\vectr{y}} \log\left(\mathrm{softmax}(\vec{x})\right) \\ &= - {\vectr{y}} \log\left(\frac{e^{\vec{x}}}{\sum_k e^{x_k}}\right) \\ &= - \log\left(\frac{e^{x_y}}{\sum_k e^{x_k}}\right) \\ &= - \left(\log\left(e^{x_y}\right) - \log\left(\sum_k e^{x_k}\right)\right)\\ &= - x_y + \log\left(\sum_k e^{x_k}\right) \end{align}$$

Where the scalar $y$ is the correct class label, so $x_y$ is the correct class score.

Note that this version of cross entropy is also what's provided by PyTorch's nn module.
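As a quick numerical check of the derivation (note that torch.logsumexp computes the $\log\sum_k e^{x_k}$ term in a numerically stable way, by internally shifting by the maximum):

In [ ]:
import torch.nn.functional as F

# Per-sample loss from the derivation: -x_y + logsumexp(x), averaged over the batch.
scores_ = torch.randn(5, 3)
labels_ = torch.randint(0, 3, (5,))
loss_ = (-scores_[torch.arange(5), labels_] + torch.logsumexp(scores_, dim=1)).mean()
print(torch.allclose(loss_, F.cross_entropy(scores_, labels_)))  # True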

TODO: Complete the implementation of the CrossEntropyLoss class in the hw2/layers.py module.

In [11]:
# Test CrossEntropy
cross_entropy = layers.CrossEntropyLoss()
scores = torch.randn(N, num_classes)
labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)

# Test forward pass
loss = cross_entropy(scores, labels)
expected_loss = torch.nn.functional.cross_entropy(scores, labels)
test.assertLess(torch.abs(expected_loss-loss).item(), 1e-5)
print('loss=', loss.item())

# Test backward pass
test_block_grad(cross_entropy, scores, y=labels)
loss= 2.7283618450164795
Comparing gradients... 
input    diff=0.000

Building Models¶

Now that we have some working Layers, we can build an MLP model of arbitrary depth and compute end-to-end gradients.

First, let's copy an idea from PyTorch and implement our own version of the nn.Sequential Module. This is a Layer which contains other Layers and calls them in sequence. We'll use this to build our MLP model.

TODO: Complete the implementation of the Sequential class in the hw2/layers.py module.

In [12]:
# Test Sequential
# Let's create a long sequence of layers and see
# whether we can compute end-to-end gradients of the whole thing.

seq = layers.Sequential(
    layers.Linear(in_features, 100),
    layers.Linear(100, 200),
    layers.Linear(200, 100),
    layers.ReLU(),
    layers.Linear(100, 500),
    layers.LeakyReLU(alpha=0.01),
    layers.Linear(500, 200),
    layers.ReLU(),
    layers.Linear(200, 500),
    layers.LeakyReLU(alpha=0.1),
    layers.Linear(500, 1),
    layers.Sigmoid(),
)
x_test = torch.randn(N, in_features)

# Test forward pass
z = seq(x_test)
test.assertSequenceEqual(z.shape, [N, 1])

# Test backward pass
test_block_grad(seq, x_test)
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
param#09 diff=0.000
param#10 diff=0.000
param#11 diff=0.000
param#12 diff=0.000
param#13 diff=0.000
param#14 diff=0.000

Now, equipped with a Sequential, all we have to do is create an MLP architecture. We'll define our MLP with the following hyperparameters:

  • Number of input features, $D$.
  • Number of output classes, $C$.
  • Sizes of hidden layers, $h_1,\dots,h_L$.

So the architecture will be:

FC($D$, $h_1$) $\rightarrow$ ReLU $\rightarrow$ FC($h_1$, $h_2$) $\rightarrow$ ReLU $\rightarrow$ $\cdots$ $\rightarrow$ FC($h_{L-1}$, $h_L$) $\rightarrow$ ReLU $\rightarrow$ FC($h_{L}$, $C$)

We'll also create a sequence of the above MLP and a cross-entropy loss, since it's the gradient of the loss that we need in order to train a model.

TODO: Complete the implementation of the MLP class in the hw2/layers.py module. Ignore the dropout parameter for now.

In [13]:
# Create an MLP model
mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100])
print(mlp)
MLP, Sequential
	[0] Linear(self.in_features=200, self.out_features=100)
	[1] ReLU
	[2] Linear(self.in_features=100, self.out_features=50)
	[3] ReLU
	[4] Linear(self.in_features=50, self.out_features=100)
	[5] ReLU
	[6] Linear(self.in_features=100, self.out_features=10)

In [14]:
# Test MLP architecture
N = 100
in_features = 10
num_classes = 10
for activation in ('relu', 'sigmoid'):
    mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100], activation=activation)
    test.assertEqual(len(mlp.sequence), 7)
    
    num_linear = 0
    for b1, b2 in zip(mlp.sequence, mlp.sequence[1:]):
        # print(str(b2).lower())
        if (str(b2).lower() == activation):
            test.assertTrue(str(b1).startswith('Linear'))
            num_linear += 1
            
    test.assertTrue(str(mlp.sequence[-1]).startswith('Linear'))
    test.assertEqual(num_linear, 3)

    # Test MLP gradients
    # Test forward pass
    x_test = torch.randn(N, in_features)
    labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)
    z = mlp(x_test)
    test.assertSequenceEqual(z.shape, [N, num_classes])

    # Create a sequence of MLPs and CE loss
    seq_mlp = layers.Sequential(mlp, layers.CrossEntropyLoss())
    loss = seq_mlp(x_test, y=labels)
    test.assertEqual(loss.dim(), 0)
    print(f'MLP loss={loss}, activation={activation}')

    # Test backward pass
    test_block_grad(seq_mlp, x_test, y=labels)
MLP loss=2.30924391746521, activation=relu
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
MLP loss=2.3934404850006104, activation=sigmoid
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000

If the above tests passed then congratulations - you've now implemented an arbitrarily deep model and loss function with end-to-end automatic differentiation!

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [15]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Suppose we have a linear (i.e. fully-connected) layer with a weight tensor $\mat{W}$, defined with in_features=1024 and out_features=512. We apply this layer to an input tensor $\mat{X}$ containing a batch of N=64 samples. The output of the layer is denoted as $\mat{Y}$.

  1. Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{X}}$ of the output of the layer w.r.t. the input $\mat{X}$.

    A. What is the shape of this tensor?
    B. Is this Jacobian sparse (most elements zero by definition)? If so, why and which elements?
    C. Given the gradient of the output w.r.t. some downstream scalar loss $L$, $\delta\mat{Y}=\pderiv{L}{\mat{Y}}$, do we need to materialize the above Jacobian in order to calculate the downstream gradient w.r.t. the input ($\delta\mat{X}$)? If yes, explain why; if no, show how to calculate it without materializing the Jacobian.
  2. Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{W}}$ of the output of the layer w.r.t. the layer weights $\mat{W}$. Answer questions A-C about it as well.

In [16]:
display_answer(hw2.answers.part1_q1)

Your answer:

1.A - The input shape is $64\times 1024$ and the output shape is $64\times 512$. The full Jacobian $\pderiv{\mat{Y}}{\mat{X}}$ therefore has shape $64\times 512\times 64\times 1024$.

1.B - Yes. Each output sample $\mat{Y}_{i,:}$ depends only on the corresponding input sample $\mat{X}_{i,:}$, so every Jacobian element with mismatched sample indices, $\pderiv{\mat{Y}_{i,j}}{\mat{X}_{k,l}}$ with $i \neq k$, is zero by definition. The Jacobian is block-diagonal across the batch dimension, and each non-zero $512\times 1024$ block is just $\mat{W}$.

1.C - No. Using the chain rule we can calculate the vector-Jacobian product directly instead of fully materializing the Jacobian: since each non-zero block of the Jacobian is $\mat{W}$, we have $\delta\mat{X} = \delta\mat{Y}\,\mat{W}$, a single matrix multiplication.

2.A - Here we take the derivative of each of the $64\times 512$ output elements with respect to each of the $512\times 1024$ elements of $\mat{W}$, so the full Jacobian has shape $64\times 512\times 512\times 1024$.

2.B - Yes. Each element $\mat{Y}_{i,j}$ is a linear combination of only the $j$-th row of $\mat{W}$, so $\pderiv{\mat{Y}_{i,j}}{\mat{W}_{k,l}} = 0$ for $j \neq k$. The Jacobian is sparse: each output element's gradient involves only its corresponding weight row.

2.C - No, we again don't need to materialize the Jacobian: the chain rule gives the VJP directly as $\delta\mat{W} = \delta\mat{Y}^\top \mat{X}$.

Question 2¶

Is back-propagation required in order to train neural networks with descent-based optimization? Why or why not?

In [17]:
display_answer(hw2.answers.part1_q2)

Your answer:

Backprop is just an efficient way to compute gradients; therefore it is not the only way to perform descent-based optimization, and it is not strictly required for descent-based training. For example, we saw in the tutorial that it is possible to use forward-mode AD instead (and there are other methods as well, e.g. https://arxiv.org/abs/2202.08587). However, the combination of the chain rule, computational graphs, and automatic differentiation makes backprop the method with the best trade-off of efficiency and accuracy for the heavy computations of deep learning, and it is by far the most commonly used in the field.


In [ ]:
 
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 2: Optimization and Training¶

In this part we will learn how to implement optimization algorithms for deep networks. Additionally, we'll learn how to write training loops and implement a modular model trainer. We'll use our optimizers and training code to test a few configurations for classifying images with an MLP model.

In [1]:
import os
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Implementing Optimization Algorithms¶

In the context of deep learning, an optimization algorithm is some method of iteratively updating model parameters so that the loss converges toward some local minimum (which we hope will be good enough).

Gradient descent-based methods are by far the most popular algorithms for optimization of neural network parameters. However the high-dimensional loss-surfaces we encounter in deep learning applications are highly non-convex. They may be riddled with local minima, saddle points, large plateaus and a host of very challenging "terrain" for gradient-based optimization. This gave rise to many different methods of performing the parameter updates based on the loss gradients, aiming to tackle these optimization challenges.

The most basic gradient-based update rule can be written as,

$$ \vec{\theta} \leftarrow \vec{\theta} - \eta \nabla_{\vec{\theta}} L(\vec{\theta}; \mathcal{D}) $$

where $\mathcal{D} = \left\{ (\vec{x}^i, \vec{y}^i) \right\}_{i=1}^{M}$ is our training dataset or part of it. Specifically, if we have in total $N$ training samples, then

  • If $M=N$ this is known as regular gradient descent. If the dataset does not fit in memory the gradient of this loss becomes infeasible to compute.
  • If $M=1$, the loss is computed w.r.t. a single different sample each time. This is known as stochastic gradient descent.
  • If $1<M<N$ this is known as stochastic mini-batch gradient descent. This is the most commonly-used option.
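For example, here's a toy illustration of the mini-batch case on a small linear-regression problem (plain autograd, not our Layer/Optimizer classes):

In [ ]:
torch.manual_seed(0)
N_toy, M = 256, 16
X_toy = torch.randn(N_toy, 3)
y_toy = X_toy @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(N_toy)

theta = torch.zeros(3, requires_grad=True)
eta = 0.1
for step in range(200):
    idx = torch.randint(0, N_toy, (M,))                # sample a mini-batch
    loss = ((X_toy[idx] @ theta - y_toy[idx]) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        theta -= eta * theta.grad                      # the basic update rule
        theta.grad.zero_()
print(theta)  # should approach [1.0, -2.0, 0.5]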

The intuition behind gradient descent is simple: since the gradient of a multivariate function points in the direction of steepest ascent ("uphill"), we move in the opposite direction. A small step size $\eta$, known as the learning rate, is required since the gradient only serves as a first-order linear approximation of the function's behaviour at $\vec{\theta}$ (recall e.g. the Taylor expansion). In truth, however, our loss surface generally has nontrivial curvature caused by a high-order nonlinear dependency on $\vec{\theta}$, so taking a large step in the direction of the gradient may actually increase the function value.

The idea behind the stochastic versions is that by constantly changing the samples we compute the loss with, we get a dynamic error surface, i.e. it's different for each set of training samples. This is thought to generally improve the optimization, since it may help the optimizer escape flat regions or sharp local minima: these features may disappear in the loss surface of subsequent batches. The image below illustrates this; the different lines are different one-dimensional losses for different training-set samples.

Deep learning frameworks generally provide implementations of various gradient-based optimization algorithms. Here we'll implement our own optimization module from scratch, this time keeping a similar API to the PyTorch optim package.

We define a base Optimizer class. An optimizer holds a set of parameter tensors (these are the trainable parameters of some model) and maintains internal state. It may be used as follows:

  • After the forward pass has been performed the optimizer's zero_grad() function is invoked to clear the parameter gradients computed by previous iterations.
  • After the backward pass has been performed, and gradients have been calculated for these parameters, the optimizer's step() function is invoked in order to update the value of each parameter based on its gradient.

The exact method of update is implementation-specific for each optimizer and may depend on its internal state. In addition, adding the regularization penalty to the gradient is handled by the optimizer since it only depends on the parameter values (and not the data).
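For reference, here's the same zero_grad()/step() contract as used with PyTorch's built-in optim package (our Optimizer mimics this usage pattern):

In [ ]:
model_ = torch.nn.Linear(4, 2)
opt_ = torch.optim.SGD(model_.parameters(), lr=0.1)

x_, y_ = torch.randn(8, 4), torch.randn(8, 2)
opt_.zero_grad()                                   # clear stale gradients
loss_ = torch.nn.functional.mse_loss(model_(x_), y_)
loss_.backward()                                   # fill p.grad for each parameter
opt_.step()                                        # update parameters in-place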

Here's the API of our Optimizer:

In [3]:
import hw2.optimizers as optimizers
help(optimizers.Optimizer)
Help on class Optimizer in module hw2.optimizers:

class Optimizer(abc.ABC)
 |  Optimizer(params)
 |  
 |  Base class for optimizers.
 |  
 |  Method resolution order:
 |      Optimizer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, params)
 |      :param params: A sequence of model parameters to optimize. Can be a
 |      list of (param,grad) tuples as returned by the Layers, or a list of
 |      pytorch tensors in which case the grad will be taken from them.
 |  
 |  step(self)
 |      Updates all the registered parameter values based on their gradients.
 |  
 |  zero_grad(self)
 |      Sets the gradient of the optimized parameters to zero (in place).
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties defined here:
 |  
 |  params
 |      :return: A sequence of parameter tuples, each tuple containing
 |      (param_data, param_grad). The data should be updated in-place
 |      according to the grad.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'step'})

Vanilla SGD with Regularization¶

Let's start by implementing the simplest gradient-based optimizer. The update rule will be exactly as stated above, but we'll also add an L2-regularization term to the gradient. Remember that in the loss function, the L2 regularization term is expressed by

$$R(\vec{\theta}) = \frac{1}{2}\lambda||\vec{\theta}||^2_2.$$
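Since $\nabla_{\vec{\theta}} R(\vec{\theta}) = \lambda\vec{\theta}$, the combined update (a direct consequence of the definitions above) is

$$ \vec{\theta} \leftarrow \vec{\theta} - \eta \left( \delta\vec{\theta} + \lambda\vec{\theta} \right), $$

where $\delta\vec{\theta}$ is the gradient of the data loss.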

TODO: Complete the implementation of the VanillaSGD class in the hw2/optimizers.py module.

In [4]:
# Test VanillaSGD
torch.manual_seed(42)
p = torch.randn(500, 10)
dp = torch.randn(*p.shape)*2
params = [(p, dp)]

vsgd = optimizers.VanillaSGD(params, learn_rate=0.5, reg=0.1)
vsgd.step()

expected_p = torch.load('tests/assets/expected_vsgd.pt')
diff = torch.norm(p-expected_p).item()
print(f'diff={diff}')
test.assertLess(diff, 1e-3)
diff=1.0932822078757454e-06

Training¶

Now that we can build a model and loss function, compute their gradients and we have an optimizer, we can finally do some training!

In the spirit of more modular software design, we'll implement a class that will help us automate the repetitive training-loop code we would otherwise write again and again. This will be useful both for training our Layer-based models and also later for training PyTorch nn.Modules.

Here's our Trainer API:

In [5]:
import hw2.training as training
help(training.Trainer)
Help on class Trainer in module hw2.training:

class Trainer(abc.ABC)
 |  Trainer(model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
 |  
 |  A class abstracting the various tasks of training models.
 |  
 |  Provides methods at multiple levels of granularity:
 |  - Multiple epochs (fit)
 |  - Single epoch (train_epoch/test_epoch)
 |  - Single batch (train_batch/test_batch)
 |  
 |  Method resolution order:
 |      Trainer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
 |      Initialize the trainer.
 |      :param model: Instance of the model to train.
 |      :param device: torch.device to run training on (CPU or GPU).
 |  
 |  fit(self, dl_train: torch.utils.data.dataloader.DataLoader, dl_test: torch.utils.data.dataloader.DataLoader, num_epochs: int, checkpoints: str = None, early_stopping: int = None, print_every: int = 1, trial=None, **kw) -> cs236781.train_results.FitResult
 |      Trains the model for multiple epochs with a given training set,
 |      and calculates validation loss over a given validation set.
 |      :param dl_train: Dataloader for the training set.
 |      :param dl_test: Dataloader for the test set.
 |      :param num_epochs: Number of epochs to train for.
 |      :param checkpoints: Whether to save model to file every time the
 |          test set accuracy improves. Should be a string containing a
 |          filename without extension.
 |      :param early_stopping: Whether to stop training early if there is no
 |          test loss improvement for this number of epochs.
 |      :param print_every: Print progress every this number of epochs.
 |      :return: A FitResult object containing train and test losses per epoch.
 |  
 |  save_checkpoint(self, checkpoint_filename: str)
 |      Saves the model in it's current state to a file with the given name (treated
 |      as a relative path).
 |      :param checkpoint_filename: File name or relative path to save to.
 |  
 |  test_batch(self, batch) -> cs236781.train_results.BatchResult
 |      Runs a single batch forward through the model and calculates loss.
 |      :param batch: A single batch of data  from a data loader (might
 |          be a tuple of data and labels or anything else depending on
 |          the underlying dataset.
 |      :return: A BatchResult containing the value of the loss function and
 |          the number of correctly classified samples in the batch.
 |  
 |  test_epoch(self, dl_test: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
 |      Evaluate model once over a test set (single epoch).
 |      :param dl_test: DataLoader for the test set.
 |      :param kw: Keyword args supported by _foreach_batch.
 |      :return: An EpochResult for the epoch.
 |  
 |  train_batch(self, batch) -> cs236781.train_results.BatchResult
 |      Runs a single batch forward through the model, calculates loss,
 |      preforms back-propagation and updates weights.
 |      :param batch: A single batch of data  from a data loader (might
 |          be a tuple of data and labels or anything else depending on
 |          the underlying dataset.
 |      :return: A BatchResult containing the value of the loss function and
 |          the number of correctly classified samples in the batch.
 |  
 |  train_epoch(self, dl_train: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
 |      Train once over a training set (single epoch).
 |      :param dl_train: DataLoader for the training set.
 |      :param kw: Keyword args supported by _foreach_batch.
 |      :return: An EpochResult for the epoch.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'test_batch', 'train_batch'})

The Trainer class splits the task of training (and evaluating) models into three conceptual levels,

  • Multiple epochs - the fit method, which returns a FitResult containing losses and accuracies for all epochs.
  • Single epoch - the train_epoch and test_epoch methods, which return an EpochResult containing losses per batch and the single accuracy result of the epoch.
  • Single batch - the train_batch and test_batch methods, which return a BatchResult containing a single loss and the number of correctly classified samples in the batch.

It implements the first two levels. Inheriting classes are expected to implement the single-batch level methods since these are model and/or task specific.

The first thing we should do in order to verify our model, gradient calculations and optimizer implementation is to try to overfit a large model (many parameters) to a small dataset (few images). This will show us that things are working properly.

Let's begin by loading the CIFAR-10 dataset.

In [6]:
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())

print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples

Now, let's implement just a small part of our training logic since that's what we need right now.

TODO:

  1. Complete the implementation of the train_batch() method in the LayerTrainer class within the hw2/training.py module.
  2. Update the hyperparameter values in the part2_overfit_hp() function in the hw2/answers.py module. Tweak the hyperparameter values until your model overfits a small number of samples in the code block below. You should get 100% accuracy within a few epochs.

The following code block will use your custom Layer-based MLP implementation, custom Vanilla SGD, and custom trainer to overfit the data. The classification accuracy should be 100% within a few epochs.

In [7]:
import hw2.layers as layers
import hw2.answers as answers
from torch.utils.data import DataLoader

# Overfit to a very small dataset of 20 samples
batch_size = 10
max_batches = 2
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)

# Get hyperparameters
hp = answers.part2_overfit_hp()

torch.manual_seed(seed)

# Build a model and loss using our custom MLP and CE implementations
model = layers.MLP(3*32*32, num_classes=10, hidden_features=[128]*3, wstd=hp['wstd'])
loss_fn = layers.CrossEntropyLoss()

# Use our custom optimizer
optimizer = optimizers.VanillaSGD(model.params(), learn_rate=hp['lr'], reg=hp['reg'])

# Run training over small dataset multiple times
trainer = training.LayerTrainer(model, loss_fn, optimizer)
best_acc = 0
for i in range(20):
    res = trainer.train_epoch(dl_train, max_batches=max_batches)
    best_acc = res.accuracy if res.accuracy > best_acc else best_acc
    
test.assertGreaterEqual(best_acc, 98)

Now that we know training works, let's try to fit a model to a bit more data for a few epochs, to see how well we're doing. First, we need a function to plot the FitResults object.

In [8]:
from cs236781.plot import plot_fit
plot_fit?

TODO:

  1. Complete the implementation of the test_batch() method in the LayerTrainer class within the hw2/training.py module.
  2. Implement the fit() method of the Trainer class within the hw2/training.py module.
  3. Tweak the hyperparameters for this section in the part2_optim_hp() function in the hw2/answers.py module.
  4. Run the following code blocks to train. Try to get above 35-40% test-set accuracy.
In [9]:
# Define a larger part of the CIFAR-10 dataset (still not the whole thing)
batch_size = 50
max_batches = 100
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size//2, shuffle=False)
In [10]:
# Define a function to train a model with our Trainer and various optimizers
def train_with_optimizer(opt_name, opt_class, fig):
    torch.manual_seed(seed)
    
    # Get hyperparameters
    hp = answers.part2_optim_hp()
    hidden_features = [128] * 5
    num_epochs = 10
    
    # Create model, loss and optimizer instances
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'])
    loss_fn = layers.CrossEntropyLoss()
    optimizer = opt_class(model.params(), learn_rate=hp[f'lr_{opt_name}'], reg=hp['reg'])

    # Train with the Trainer
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches)
    
    fig, axes = plot_fit(fit_res, fig=fig, legend=opt_name)
    return fig
In [11]:
fig_optim = None
fig_optim = train_with_optimizer('vanilla', optimizers.VanillaSGD, fig_optim)
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
training finished after 9 epochs

Momentum¶

The simple vanilla SGD update is rarely used in practice since it's very slow to converge relative to other optimization algorithms.

One reason is that naïvely updating in the direction of the current gradient causes the parameters to fluctuate wildly in areas where the loss surface is much steeper in some dimensions than in others. Another reason is that using the same learning rate for all parameters is not a great idea, since not all parameters are created equal. For example, parameters associated with rare features should be updated with a larger step than ones associated with commonly-occurring features, because they receive fewer updates through the gradients.

Therefore more advanced optimizers take into account the previous gradients of a parameter and/or try to use a per-parameter specific learning rate instead of a common one.

Let's now implement a simple and common optimizer: SGD with Momentum. This optimizer takes previous gradients of a parameter into account when updating its value, instead of just the current one. In practice it usually provides faster convergence than vanilla SGD.

The SGD with Momentum update rule can be stated as follows: $$\begin{align} \vec{v}_{t+1} &= \mu \vec{v}_t - \eta \delta \vec{\theta}_t \\ \vec{\theta}_{t+1} &= \vec{\theta}_t + \vec{v}_{t+1} \end{align}$$

Where $\eta$ is the learning rate, $\vec{\theta}$ is a model parameter, $\delta \vec{\theta}_t=\pderiv{L}{\vec{\theta}}(\vec{\theta}_t)$ is the gradient of the loss w.r.t. to the parameter and $0\leq\mu<1$ is a hyperparameter known as momentum.

Expanding the update rule recursively shows us how the parameter update in fact depends on all previous gradient values for that parameter, where the old gradients are exponentially decayed by a factor of $\mu$ at each timestep.
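Concretely, unrolling the recursion with $\vec{v}_0 = \vec{0}$ gives

$$ \vec{v}_{t+1} = -\eta \sum_{k=0}^{t} \mu^{k}\, \delta\vec{\theta}_{t-k}, $$

i.e. an exponentially-weighted sum of all past gradients.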

Since we're incorporating previous gradients (update directions), a noisy value of the current gradient will have less effect, so that the general direction of previous updates is somewhat maintained. The following figure illustrates this.

TODO:

  1. Complete the implementation of the MomentumSGD class in the hw2/optimizers.py module.
  2. Tweak the learning rate for momentum in part2_optim_hp() the function in the hw2/answers.py module.
  3. Run the following code block to compare to the vanilla SGD.
In [12]:
import optuna  # needed below for optuna.exceptions.TrialPruned

def objective(trial):
    lr = trial.suggest_float("lr_momentum", 1e-5, 1e-1)
    reg = trial.suggest_float("reg", 1e-5, 1e-1)
    wstd = trial.suggest_float("wstd", 1e-5, 1e-1)
    hidden_features = [128] * 5  # same architecture as in train_with_optimizer above
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=wstd)
    loss_fn = layers.CrossEntropyLoss()
    optimizer = optimizers.MomentumSGD(model.params(), learn_rate=lr, reg=reg)

    # Train with the Trainer
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res = trainer.fit(dl_train, dl_test, num_epochs=5, max_batches=max_batches, verbose=False)
    if trial.should_prune():
        raise optuna.exceptions.TrialPruned()
    return fit_res.test_acc[-1]
In [13]:
# import optuna
# study = optuna.create_study(study_name='sgd_momentum', storage='sqlite:///sgd_momentum.db', direction='maximize')
In [14]:
# study.optimize(objective, n_trials=50)
In [15]:
# study.best_params
In [16]:
fig_optim = train_with_optimizer('momentum', optimizers.MomentumSGD, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
training finished after 9 epochs
Out[16]:

Bonus: RMSProp¶

This is another optimizer that accounts for previous gradients, but this time it uses them to adapt the learning rate per parameter.

RMSProp maintains a decaying moving average of previous squared gradients, $$ \vec{r}_{t+1} = \gamma\vec{r}_{t} + (1-\gamma)\delta\vec{\theta}_t^2 $$ where $0<\gamma<1$ is a decay constant usually set close to $1$, and $\delta\vec{\theta}_t^2$ denotes element-wise squaring.

The update rule for each parameter is then, $$ \vec{\theta}_{t+1} = \vec{\theta}_t - \left( \frac{\eta}{\sqrt{\vec{r}_{t+1}+\varepsilon}} \right) \delta\vec{\theta}_t $$

where $\varepsilon$ is a small constant to prevent numerical instability. The idea here is to decrease the learning rate for parameters with high gradient values and vice-versa. The decaying moving average prevents accumulating all the past gradients which would cause the effective learning rate to become zero.
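Here's a sketch of a single such update on one tensor, transcribing the equations above directly (illustrative only, not the hw2 RMSProp class):

In [ ]:
torch.manual_seed(0)
p = torch.randn(5)           # a parameter
dp = torch.randn(5)          # its gradient
r = torch.zeros_like(p)      # moving average of squared gradients
gamma, eta, eps_ = 0.99, 1e-2, 1e-8

r = gamma * r + (1 - gamma) * dp ** 2        # decaying average of dp^2
p = p - (eta / torch.sqrt(r + eps_)) * dp    # per-element effective step size
print(p)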

Bonus:

  1. Complete the implementation of the RMSProp class in the hw2/optimizers.py module.
  2. Tweak the learning rate for RMSProp in part2_optim_hp() the function in the hw2/answers.py module.
  3. Run the following code block to compare to the other optimizers.
In [17]:
fig_optim = train_with_optimizer('rmsprop', optimizers.RMSProp, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
training finished after 9 epochs
Out[17]:

Note that you should get better train/test accuracy with Momentum and RMSProp than Vanilla.

Dropout Regularization¶

Dropout is a useful technique to improve generalization of deep models.

The idea is simple: during the forward pass, drop (i.e. set to zero) the activation of each neuron with probability $p$. For example, if $p=0.4$ this means we drop the activations of 40% of the neurons (on average).

There are a few important things to note about dropout:

  1. It is only performed during training. When testing our model the dropout layers should be a no-op.
  2. In the backward pass, gradients are only propagated back into neurons that weren't dropped during the forward pass.
  3. During testing, the activations must be scaled, since the expected value of each neuron's activation during training is only $1-p$ times its original expectation. Thus, we need to scale the test-time activations by $1-p$ to match. Equivalently, we can scale the train-time activations by $1/(1-p)$, as sketched below.
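Here's a standalone sketch of the "inverted dropout" variant from point 3 above, which scales at train time (illustrative only, not the hw2.layers.Dropout class):

In [ ]:
def dropout_sketch(x, p=0.4, training=True):
    if not training or p == 0:
        return x                               # no-op in evaluation mode
    mask = (torch.rand_like(x) > p).float()    # keep each activation w.p. 1-p
    return x * mask / (1 - p)                  # rescale to preserve expectation

print(dropout_sketch(torch.ones(2, 6)))        # ~40% zeros, survivors scaled by 1/(1-p)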

TODO:

  1. Complete the implementation of the Dropout class in the hw2/layers.py module.
  2. Finish the implementation of the MLP's __init__() method in the hw2/layers.py module. If dropout>0 you should add a Dropout layer after each ReLU.
In [18]:
from hw2.grad_compare import compare_layer_to_torch

# Check architecture of MLP with dropout layers
mlp_dropout = layers.MLP(in_features, num_classes, [50]*3, dropout=0.6)
print(mlp_dropout)
test.assertEqual(len(mlp_dropout.sequence), 10)
for b1, b2 in zip(mlp_dropout.sequence, mlp_dropout.sequence[1:]):
    if str(b1).lower() == 'relu':
        test.assertTrue(str(b2).startswith('Dropout'))
test.assertTrue(str(mlp_dropout.sequence[-1]).startswith('Linear'))
MLP, Sequential
	[0] Linear(self.in_features=3072, self.out_features=50)
	[1] ReLU
	[2] Dropout(p=0.6)
	[3] Linear(self.in_features=50, self.out_features=50)
	[4] ReLU
	[5] Dropout(p=0.6)
	[6] Linear(self.in_features=50, self.out_features=50)
	[7] ReLU
	[8] Dropout(p=0.6)
	[9] Linear(self.in_features=50, self.out_features=10)

In [19]:
# Test end-to-end gradient in train and test modes.
print('Dropout, train mode')
mlp_dropout.train(True)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
    
print('Dropout, test mode')
mlp_dropout.train(False)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
Dropout, train mode
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
Dropout, test mode
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000

To see whether dropout really improves generalization, let's take a small training set (small enough to overfit) and a large test set and check whether we get less overfitting and perhaps improved test-set accuracy when using dropout.

In [20]:
# Define a small set from CIFAR-10, but take a larger test set since we want to test generalization
batch_size = 10
max_batches = 40
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size*2, shuffle=False)

TODO: Tweak the hyperparameters for this section in the part2_dropout_hp() function in the hw2/answers.py module. Try to set them so that the first model (with dropout=0) overfits. You can disable the other dropout options until you tune the hyperparameters. We can then see the effect of dropout for generalization.

In [21]:
# Get hyperparameters
hp = answers.part2_dropout_hp()
hidden_features = [400] * 1
num_epochs = 30
In [22]:
torch.manual_seed(seed)
fig=None
#for dropout in [0]:  # Use this for tuning the hyperparams until you overfit
for dropout in [0,0.4,0.8]:
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'], dropout=dropout)
    loss_fn = layers.CrossEntropyLoss()
    optimizer = optimizers.MomentumSGD(model.params(), learn_rate=hp['lr'], reg=0)

    print('*** Training with dropout=', dropout)
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res_dropout = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches, print_every=6)
    fig, axes = plot_fit(fit_res_dropout, fig=fig, legend=f'dropout={dropout}', log_loss=True)
*** Training with dropout= 0
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---
training finished after 29 epochs
*** Training with dropout= 0.4
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---
training finished after 29 epochs
*** Training with dropout= 0.8
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---
training finished after 29 epochs

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [23]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Regarding the graphs you got for the three dropout configurations:

  1. Explain the graphs of no-dropout vs dropout. Do they match what you expected to see?

    • If yes, explain why and provide examples based on the graphs.
    • If no, explain what you think the problem is and what should be modified to fix it.
  2. Compare the low-dropout setting to the high-dropout setting and explain based on your graphs.

In [24]:
display_answer(hw2.answers.part2_q1)

Your answer:

  1. The difference between the graph without dropout and the graphs with dropout matches what we expected: since dropout is a form of regularization, it makes sense that without dropout we get the best performance on the training set, while not necessarily on the test set, which can suffer from overfitting. At the same time, too strong a regularization can also hurt performance on the training set, because it limits the model's freedom; in this case, too much dropout prevents the different neurons from dividing "roles" between them and learning a complex pattern.

    Indeed, we see, as expected, that the no-dropout performance is best on the training set (and deteriorates as the dropout increases), while on the test set the change is more complex and non-monotonic: test-set performance improves slightly with moderate dropout but, at least in terms of accuracy, deteriorates when the dropout is too high.

    As an example, we can see very clearly how the gap between the loss curve of dropout=0 and the loss curve of dropout=0.4 widens throughout training, both on the training set and on the test set, but in opposite directions: on the training set the loss of dropout=0.4 is higher, while on the test set the loss of dropout=0 is higher.

  2. On the training set, the expected pattern was obtained: low dropout led to better performance on both metrics than the high-dropout setting, because dropout is a regularization that constrains the model.

    On the test set, however, a more complex pattern emerged: the loss of low dropout was initially lower, but it gradually increased and became higher than the loss of high dropout (which decreased moderately), while the accuracy of low dropout remained higher than that of high dropout throughout the entire training process. It is also surprising that the loss of low dropout increased slightly while its accuracy also increased. According to our interpretation, the low-dropout model achieved correct predictions with low confidence on many examples, its outputs becoming more uniform across examples throughout training, while the high-dropout model achieved superior performance (correct and highly confident) on a few examples but was wrong on the rest.

    This result was probably caused by the fact that too high a dropout prevented the neurons from dividing different roles between them, and thus prevented them from adapting to a variety of samples over time.

Question 2¶

When training a model with the cross-entropy loss function, is it possible for the test loss to increase for a few epochs while the test accuracy also increases?

If it's possible explain how, if it's not explain why not.

In [25]:
display_answer(hw2.answers.part2_q2)

Your answer: Yes, it is possible, because the per-example accuracy is boolean while the loss is continuous. For example, it is possible that for one example in the dataset the score $\hat{y}$ of the correct class $y$ increases by just enough (an arbitrarily small amount) for it to become the maximum, while for all the other examples the score of the correct class decreases substantially but still remains the maximum. In this case, the loss increases while the accuracy increases too, because only one example's predicted label changes, and it changes to the correct one.

This can happen when the model improves its predictions but the variance of the values in the output is reduced, so that the scores of the correct labels get closer to the smallest value needed for them to remain the predicted labels.
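A tiny numeric sketch of this effect (the logits below are made up purely for illustration, not taken from our model):

import torch
import torch.nn.functional as F

y = torch.tensor([1, 1])  # both samples truly belong to class 1

# "Epoch t": sample 0 is barely wrong, sample 1 is correct with a huge margin
logits_t = torch.tensor([[0.1, 0.0], [0.0, 10.0]])
# "Epoch t+1": both samples are now barely correct (tiny margins)
logits_t1 = torch.tensor([[0.0, 0.01], [0.0, 0.01]])

for name, logits in [('t', logits_t), ('t+1', logits_t1)]:
    acc = (logits.argmax(dim=1) == y).float().mean().item()
    loss = F.cross_entropy(logits, y).item()
    print(f'epoch {name}: accuracy={acc:.2f}, loss={loss:.4f}')
# epoch t:   accuracy=0.50, loss=0.3722
# epoch t+1: accuracy=1.00, loss=0.6882  <- both loss and accuracy increased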

Question 3¶

  1. Explain the difference between gradient descent and back-propagation.

  2. Compare in detail between gradient descent (GD) and stochastic gradient descent (SGD).

  3. Why is SGD used more often in the practice of deep learning? Provide a few justifications.

  4. You would like to try GD to train your model instead of SGD, but you're concerned that your dataset won't fit in memory. A friend suggested that you should split the data into disjoint batches, do multiple forward passes until all data is exhausted, and then do one backward pass on the sum of the losses.

    1. Would this approach produce a gradient equivalent to GD? Why or why not? provide mathematical justification for your answer.
    2. You implemented the suggested approach, and were careful to use batch sizes small enough so that each batch fits in memory. However, after some number of batches you got an out of memory error. What happened?
In [26]:
display_answer(hw2.answers.part2_q3)

Your answer:

  1. Backpropagation is the algorithm that computes the gradients of all the parameters in the model. Gradient descent is the algorithm that updates the parameters of the model according to their gradients; in the case of a neural network, it can use the backpropagation algorithm for this purpose.

  2. In GD, each iteration uses the entire training set $X$ to calculate the error and the gradients. In contrast, in SGD each iteration randomly selects a single example $x \in X$ (or a small mini-batch) and calculates the error and the gradients based on it.

  3. First, SGD has lower time and space complexity per step than GD, because it calculates the gradient from only one example (or mini-batch) at a time.

    Second, since SGD considers a different random example each time, the noise in its updates can help it escape "traps": local minima that GD cannot get out of.

  4. Regarding the suggested approach:

    1. Yes, it would produce an equivalent gradient, since differentiation is linear:

    $\frac{\partial L_1}{\partial \Theta} + \frac{\partial L_2}{\partial \Theta} + ... + \frac{\partial L_k}{\partial \Theta} = \frac{\partial (L_1 + L_2 + ... + L_k)}{\partial \Theta} = \frac{\partial (\sum_{x \in X_1}L(x) + \sum_{x \in X_2}L(x) + ... + \sum_{x \in X_k}L(x) )}{\partial \Theta} = \frac{\partial \sum_{x \in X} L(x)}{\partial \Theta} = \frac{\partial L}{\partial \Theta},$

    as required.

    2. The computer still needs to keep the computational graph (all the intermediate activations) of every batch in memory in order to run the backward pass at the end: accumulating the losses without calling backward keeps all these graphs alive, so memory grows with each batch until it runs out.
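A minimal PyTorch sketch of the memory problem (synthetic stand-in data of ours, not part of the assignment code):

import torch
import torch.nn as nn

model = nn.Linear(100, 10)
loss_fn = nn.CrossEntropyLoss(reduction='sum')

# Synthetic stand-in for a dataset split into disjoint batches
batches = [(torch.randn(32, 100), torch.randint(0, 10, (32,))) for _ in range(8)]
n_samples = sum(len(yb) for _, yb in batches)

total_loss = torch.tensor(0.0)
for xb, yb in batches:
    # Each forward pass builds a fresh computational graph; summing the
    # losses without calling backward() keeps every graph (including all
    # intermediate activations) alive, so memory grows with each batch.
    total_loss = total_loss + loss_fn(model(xb), yb)

# The resulting gradient equals the full-dataset GD gradient (sum rule),
# but the graphs are only freed here, after the single backward pass.
(total_loss / n_samples).backward()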

Question 4 (Automatic Differentiation)¶

Let $f = f_n \circ f_{n-1} \circ ... \circ f_1$ where each $f_i: \mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function which is easy to evaluate and differentiate (each query costs $\mathcal{O}(1)$ at a given point).

  1. In this exercise you will reduce the memory complexity of evaluating $\nabla f (x_0)$ at some point $x_0$. Assume that you are given $f$ already expressed as a computational graph, together with a point $x_0$.

    1. Show how to reduce the memory complexity for computing the gradient using forward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?
    2. Show how to reduce the memory complexity for computing the gradient using backward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?

  2. Can these techniques be generalized for arbitrary computational graphs?

  3. Think how the backprop algorithm can benefit from these techniques when applied to deep architectures (e.g. VGGs, ResNets).

In [27]:
display_answer(hw2.answers.part2_q4)

A. Pseudocode:

$lastResult \gets x_0$

$lastGrad \gets 1$

For $i \gets 1$ to $n$:

$\quad lastGrad \gets lastGrad \cdot f_{i}.derivative(lastResult)$

$\quad lastResult \gets f_{i}(lastResult)$

EndFor

return $lastGrad$

End

Memory complexity: $O(1)$.

B. Pseudocode:

$results \gets [\,]$

$results[0] \gets x_0$

For $i \gets 1$ to $n$:

$\quad results[i] \gets f_{i}(results[i-1])$

EndFor

$lastGrad \gets 1$

For $i \gets n-1$ downto $0$:

$\quad lastGrad \gets lastGrad \cdot f_{i+1}.derivative(results[i])$

EndFor

return $lastGrad$

End

If we assume that the intermediate results are already given, the memory complexity is $O(1)$; otherwise it is $O(n)$, since all intermediate results must be stored during the forward sweep.
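A minimal Python sketch of both pseudocode variants above, for a chain given as (f_i, f_i') pairs (a toy representation of ours, not a real AD library):

import math

# Toy chain f = f_n \circ ... \circ f_1, each function given with its derivative
chain = [(math.exp, math.exp), (math.sin, math.cos)] * 3

def grad_forward_mode(chain, x0):
    """Forward mode: O(n) time, O(1) memory -- we only keep the current
    value and the running product of derivatives."""
    val, grad = x0, 1.0
    for f, df in chain:
        grad *= df(val)  # multiply by f_i'(y_{i-1}) before advancing
        val = f(val)
    return grad

def grad_backward_mode(chain, x0):
    """Backward mode: O(n) time, O(n) memory -- all intermediate results
    are stored during the forward sweep, then consumed in reverse."""
    results = [x0]
    for f, _ in chain:
        results.append(f(results[-1]))
    grad = 1.0
    for (_, df), val in zip(reversed(chain), reversed(results[:-1])):
        grad *= df(val)
    return grad

x0 = 0.5
print(grad_forward_mode(chain, x0))   # identical results,
print(grad_backward_mode(chain, x0))  # different memory footprints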

  1. This technique relies on the assumption that all the functions are executed sequentially in a single chain, not in parallel. It is this assumption that allows us to keep only a constant number of values and gradients at any moment; in particular, each function has exactly one input. So, in principle, if we use this technique on a computational graph with functions that are executed in parallel, we are not guaranteed to be able to run the algorithm in $O(1)$ memory. However, we can sometimes still save memory to some extent.

  2. This technique can help us in cases where it is not necessary to keep all the gradients in memory. This can happen, for example, when we only fine-tune a limited number of layers, without updating all the other layers. Alternatively, we could use the algorithm to update each layer separately with low memory complexity, but the time complexity in that case would be very high.

$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 3: Binary Classification with Multilayer Perceptrons¶

In this part we'll implement a general-purpose MLP and a binary classifier using PyTorch. We'll implement its training, and also learn about decision boundaries and threshold selection in the context of binary classification. Finally, we'll explore the effect of depth and width on an MLP's performance.

In [1]:
import os
import re
import sys
import glob
import unittest
from typing import Sequence, Tuple

import sklearn
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torch.nn as nn
import torchvision.transforms as tvtf
from torch import Tensor

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Synthetic Dataset¶

To test our first neural network-based classifiers we'll start by creating a toy binary classification dataset, but one which is not trivial for a linear model.

In [3]:
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
In [4]:
def rotate_2d(X, deg=0):
    """
    Rotates each 2d sample in X of shape (N, 2) by deg degrees.
    """
    a = np.deg2rad(deg)
    return X @ np.array([[np.cos(a), -np.sin(a)],[np.sin(a), np.cos(a)]]).T

def plot_dataset_2d(X, y, n_classes=2, alpha=0.2, figsize=(8, 6), title=None, ax=None):
    if ax is None:
        fig, ax = plt.subplots(1, 1, figsize=figsize)
    for c in range(n_classes):
        ax.scatter(*X[y==c,:].T, alpha=alpha, label=f"class {c}");
        
    ax.set_xlabel("$x_1$"); ax.set_ylabel("$x_2$");
    ax.legend(); ax.set_title((title or '') + f" (n={len(y)})")

We'll split our data into 80% train and validation, and 20% test. To make it a bit more challenging, we'll simulate a somewhat real-world setting where there are multiple populations, and the training/validation data is not sampled iid from the underlying data distribution.

In [5]:
np.random.seed(seed)

N = 10_000
N_train = int(N * .8)

# Create data from two different distributions for the training/validation
X1, y1 = make_moons(n_samples=N_train//2, noise=0.2)
X1 = rotate_2d(X1, deg=10)
X2, y2 = make_moons(n_samples=N_train//2, noise=0.25)
X2 = rotate_2d(X2, deg=50)

# Test data comes from a similar but noisier distribution
X3, y3 = make_moons(n_samples=(N-N_train), noise=0.3)
X3 = rotate_2d(X3, deg=40)

X, y = np.vstack([X1, X2, X3]), np.hstack([y1, y2, y3])
In [6]:
# Train and validation data is from mixture distribution
X_train, X_valid, y_train, y_valid = train_test_split(X[:N_train, :], y[:N_train], test_size=1/3, shuffle=False)

# Test data is only from the second distribution
X_test, y_test = X[N_train:, :], y[N_train:]

fig, ax = plt.subplots(1, 3, figsize=(20, 5))
plot_dataset_2d(X_train, y_train, title='Train', ax=ax[0]);
plot_dataset_2d(X_valid, y_valid, title='Validation', ax=ax[1]);
plot_dataset_2d(X_test, y_test, title='Test', ax=ax[2]);

Now let us create a data loader for each dataset.

In [7]:
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

batch_size = 32

dl_train, dl_valid, dl_test = [
    DataLoader(
        dataset=TensorDataset(
            torch.from_numpy(X_).to(torch.float32),
            torch.from_numpy(y_)
        ),
        shuffle=True,
        num_workers=0,
        batch_size=batch_size
    )
    for X_, y_ in [(X_train, y_train), (X_valid, y_valid), (X_test, y_test)]
]

print(f'{len(dl_train.dataset)=}, {len(dl_valid.dataset)=}, {len(dl_test.dataset)=}')
len(dl_train.dataset)=5333, len(dl_valid.dataset)=2667, len(dl_test.dataset)=2000

Simple MLP¶

A multilayer perceptron is arguably the most basic type of neural network model. It is composed of $L$ layers, each layer $l$ with $n_l$ perceptron ("neuron") units. Each perceptron is connected to all outputs of the previous layer (or to all inputs, in the first layer), calculates their weighted sum, applies a non-linearity and produces a single output.

Each layer $l$ operates on the output of the previous layer ($\vec{y}_{l-1}$) and calculates:

$$ \vec{y}_l = \varphi\left( \mat{W}_l \vec{y}_{l-1} + \vec{b}_l \right),~ \mat{W}_l\in\set{R}^{n_{l}\times n_{l-1}},~ \vec{b}_l\in\set{R}^{n_l},~ l \in \{1,2,\dots,L\}. $$
  • Note that both input and output are vectors. We can think of the above equation as describing a layer of multiple perceptrons.
  • We'll henceforth refer to such layers as fully-connected or FC layers.
  • The first layer accepts the input of the model, i.e. $\vec{y}_0=\vec{x}\in\set{R}^d$.
  • The last layer, $L$, is the output layer, so $y_L$ is the output of the model.
  • The layers $1, 2, \dots, L-1$ are called hidden layers.
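As a quick sanity check of the layer equation above (a standalone snippet, separate from the MLP class we'll implement below), a single FC layer is just an affine map followed by a pointwise non-linearity:

import torch

n_in, n_out = 4, 3
W = torch.randn(n_out, n_in)   # W_l with shape (n_l, n_{l-1})
b = torch.randn(n_out)         # b_l with shape (n_l,)
y_prev = torch.randn(n_in)     # y_{l-1}, the previous layer's output

# y_l = phi(W_l y_{l-1} + b_l), here with phi = ReLU
y = torch.relu(W @ y_prev + b)

# The same computation using torch.nn building blocks
fc = torch.nn.Linear(n_in, n_out)
with torch.no_grad():
    fc.weight.copy_(W)
    fc.bias.copy_(b)
print(torch.allclose(y, torch.relu(fc(y_prev))))  # True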

To begin, let's implement a general multi-layer perceptron model. We'll seek to implement it in a way which is both general in terms of architecture, and also composable so that we can use our MLP in the context of larger models.

TODO: Implement the MLP class in the hw2/mlp.py module.

In [8]:
from hw2.mlp import MLP

mlp = MLP(
    in_dim=2,
    dims=[8, 16, 32, 64],
    nonlins=['relu', 'tanh', nn.LeakyReLU(0.314), 'softmax']
)
mlp
Out[8]:
MLP(
  (layers): ModuleList(
    (0): Linear(in_features=2, out_features=8, bias=True)
    (1): ReLU()
    (2): Linear(in_features=8, out_features=16, bias=True)
    (3): Tanh()
    (4): Linear(in_features=16, out_features=32, bias=True)
    (5): LeakyReLU(negative_slope=0.314)
    (6): Linear(in_features=32, out_features=64, bias=True)
    (7): Softmax(dim=None)
  )
)

Let's try our implementation on a batch of data.

In [9]:
x0, y0 = next(iter(dl_train))

yhat0 = mlp(x0)

test.assertEqual(len([*mlp.parameters()]), 8)
test.assertEqual(yhat0.shape, (batch_size, mlp.out_dim))
test.assertTrue(torch.allclose(torch.sum(yhat0, dim=1), torch.tensor(1.0)))
test.assertIsNotNone(yhat0.grad_fn)

yhat0
/home/ilay.kamai/cs236781-hw2/hw2/mlp.py:77: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  x = l(x)
Out[9]:
tensor([[0.0154, 0.0122, 0.0150,  ..., 0.0165, 0.0186, 0.0141],
        [0.0165, 0.0122, 0.0150,  ..., 0.0165, 0.0190, 0.0147],
        [0.0175, 0.0120, 0.0146,  ..., 0.0159, 0.0175, 0.0153],
        ...,
        [0.0160, 0.0121, 0.0151,  ..., 0.0164, 0.0187, 0.0145],
        [0.0175, 0.0120, 0.0148,  ..., 0.0161, 0.0179, 0.0154],
        [0.0155, 0.0124, 0.0148,  ..., 0.0166, 0.0187, 0.0139]],
       grad_fn=<SoftmaxBackward0>)

MLP for Binary Classification¶

The MLP model we've implemented, while useful, is very general. For the task of binary classification, we would like to add some additional functionality to it: the ability to output a normalized score for a sample being in class one (which we interpret as a probability) and a prediction based on some threshold of this probability. In addition, we need some way to calculate a meaningful threshold based on the data and a trained model at hand.

In order to maintain generality, we'll add this functionality in the form of a wrapper: a BinaryClassifier class that can wrap any model producing two output features, and provide the functionality stated above.
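For intuition, a minimal sketch of such a wrapper might look as follows (the class and method bodies here are illustrative only; the actual required interface is defined in hw2/classifier.py):

import torch
import torch.nn as nn

class ThresholdedBinaryWrapper(nn.Module):
    """Illustrative wrapper around any model producing two output features."""
    def __init__(self, model: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.model = model
        self.threshold = threshold

    def forward(self, x):
        return self.model(x)  # raw two-class scores

    def predict_proba(self, x):
        # Normalize the two scores into class probabilities
        return torch.softmax(self.model(x), dim=1)

    def classify(self, x):
        # Positive class iff P(class 1) >= threshold
        proba_pos = self.predict_proba(x)[:, 1]
        return (proba_pos >= self.threshold).to(torch.int)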

TODO: In the hw2/classifier.py module, implement the BinaryClassifier and the missing parts of its base class, Classifier. Read the method documentation carefully and implement accordingly. You can ignore the roc_threshold method at this stage.

In [10]:
from hw2.classifier import BinaryClassifier

bmlp4 = BinaryClassifier(
    model=MLP(in_dim=2, dims=[*[10]*3, 2], nonlins=[*['relu']*3, 'none']),
    threshold=0.5
)
print(bmlp4)

# Test model
test.assertEqual(len([*bmlp4.parameters()]), 8)
test.assertIsNotNone(bmlp4(x0).grad_fn)

# Test forward
yhat0_scores = bmlp4(x0)
test.assertEqual(yhat0_scores.shape, (batch_size, 2))
test.assertFalse(torch.allclose(torch.sum(yhat0_scores, dim=1), torch.tensor(1.0)))

# Test predict_proba
yhat0_proba = bmlp4.predict_proba(x0)
test.assertEqual(yhat0_proba.shape, (batch_size, 2))
test.assertTrue(torch.allclose(torch.sum(yhat0_proba, dim=1), torch.tensor(1.0)))

# Test classify
yhat0 = bmlp4.classify(x0)
test.assertEqual(yhat0.shape, (batch_size,))
test.assertEqual(yhat0.dtype, torch.int)
test.assertTrue(all(yh_ in (0, 1) for yh_ in yhat0))
BinaryClassifier(
  (model): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=2, out_features=10, bias=True)
      (1): ReLU()
      (2): Linear(in_features=10, out_features=10, bias=True)
      (3): ReLU()
      (4): Linear(in_features=10, out_features=10, bias=True)
      (5): ReLU()
      (6): Linear(in_features=10, out_features=2, bias=True)
      (7): Identity()
    )
  )
  (head): Softmax(dim=1)
)

Training¶

Now that we have a classifier, we need to train it. We will abstract the various aspects of training, such as multiple epochs, iterating over batches, early stopping and saving model checkpoints, into a Trainer class that will take care of these concerns.

The Trainer class splits the task of training (and evaluating) models into three conceptual levels,

  • Multiple epochs - the fit method, which returns a FitResult containing losses and accuracies for all epochs.
  • Single epoch - the train_epoch and test_epoch methods, which return an EpochResult containing losses per batch and the single accuracy result of the epoch.
  • Single batch - the train_batch and test_batch methods, which return a BatchResult containing a single loss and the number of correctly classified samples in the batch.

It implements the first two levels. Inheriting classes are expected to implement the single-batch level methods since these are model and/or task specific.
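Schematically, the three levels compose roughly as follows (a control-flow sketch under our own simplifications; the real Trainer also handles printing, early stopping and checkpoints):

# Rough control-flow sketch of the Trainer abstraction (illustrative only)
class TrainerSketch:
    def fit(self, dl_train, dl_test, num_epochs):
        # Level 1: multiple epochs -> aggregated per-epoch results
        train_acc, test_acc = [], []
        for epoch in range(num_epochs):
            train_acc.append(self._run_epoch(dl_train, self.train_batch))
            test_acc.append(self._run_epoch(dl_test, self.test_batch))
        return train_acc, test_acc

    def _run_epoch(self, dl, batch_fn):
        # Level 2: single epoch -> iterate batches, aggregate accuracy
        num_correct, num_samples = 0, 0
        for x, y in dl:
            num_correct += batch_fn((x, y))  # delegate to level 3
            num_samples += len(y)
        return 100.0 * num_correct / num_samples

    def train_batch(self, batch):
        # Level 3: model/task specific -- implemented by subclasses
        # such as ClassifierTrainer
        raise NotImplementedError()

    def test_batch(self, batch):
        raise NotImplementedError()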

TODO:

  1. Implement the Trainer's fit method and the ClassifierTrainer's train_batch/test_batch methods, in the hw2/training.py module. You may ignore the optional parts about early stopping and model checkpoints at this stage.

  2. Set the model's architecture hyper-parameters and the optimizer hyperparameters in part3_arch_hp() and part3_optim_hp(), respectively, in hw2/answers.py.

Since this is a toy dataset, you should be able to quickly get above 85% accuracy even on the test set.

In [11]:
from hw2.training import ClassifierTrainer
from hw2.answers import part3_arch_hp, part3_optim_hp

torch.manual_seed(seed)

hp_arch = part3_arch_hp()
hp_optim = part3_optim_hp()

model = BinaryClassifier(
    model=MLP(
        in_dim=2,
        dims=[*[hp_arch['hidden_dims'],]*hp_arch['n_layers'], 2],
        nonlins=[*[hp_arch['activation'],]*hp_arch['n_layers'], hp_arch['out_activation']]
    ),
    threshold=0.5,
)
print(model)

loss_fn = hp_optim.pop('loss_fn')
optimizer = torch.optim.SGD(params=model.parameters(), **hp_optim)
trainer = ClassifierTrainer(model, loss_fn, optimizer)

fit_result = trainer.fit(dl_train, dl_valid, num_epochs=20, print_every=10);


test.assertGreaterEqual(fit_result.train_acc[-1], 85.0)
test.assertGreaterEqual(fit_result.test_acc[-1], 75.0)
BinaryClassifier(
  (model): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=2, out_features=1024, bias=True)
      (1): ReLU()
      (2): Linear(in_features=1024, out_features=1024, bias=True)
      (3): ReLU()
      (4): Linear(in_features=1024, out_features=2, bias=True)
      (5): Softmax(dim=None)
    )
  )
  (head): Softmax(dim=1)
)
--- EPOCH 1/20 ---
--- EPOCH 11/20 ---
--- EPOCH 20/20 ---
training finished after 19 epochs
In [12]:
from cs236781.plot import plot_fit

plot_fit(fit_result, log_loss=False, train_test_overlay=True);

Decision Boundary¶

An important part of understanding what a non-linear classifier like our MLP is doing is visualizing its decision boundaries. When we only have two input features, these are relatively simple to visualize: we can plot our data on the plane and evaluate our classifier on a dense 2D grid in order to approximate the decision boundary.

TODO: Implement the plot_decision_boundary_2d function in the hw2/classifier.py module.

In [13]:
from hw2.classifier import plot_decision_boundary_2d

fig, ax = plot_decision_boundary_2d(model, *dl_valid.dataset.tensors)
/home/ilay.kamai/mambaforge/envs/cs236781-hw/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]

Threshold Selection¶

Another important component, especially in the context of binary classification, is threshold selection. Until now, we arbitrarily chose a threshold of 0.5 when deciding the class label based on the probability score we calculated via softmax. In other words, we classified a sample as class 1 (the 'positive' class) when its probability score was greater than or equal to 0.5.

However, in real-world classification problems we'll need to choose our threshold wisely, based on the domain-specific requirements of the problem. For example, depending on our application we might care more about high sensitivity (correctly classifying positive examples), while for other applications specificity (correctly classifying negative examples) is more important.

One way to understand the mistakes a model is making is to look at its Confusion Matrix. From it, we easily see e.g. the false-negative rate (FNR) and false-positive rate (FPR).

Let's look at the confusion matrices on the test and validation data using the model we trained above.

In [14]:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion(classifier, x: np.ndarray, y: np.ndarray, ax=None):
    y_hat = classifier.classify(torch.from_numpy(x).to(torch.float32)).numpy()
    conf_mat = confusion_matrix(y, y_hat, normalize='all')
    ConfusionMatrixDisplay(conf_mat).plot(ax=ax, colorbar=False)
    
model.threshold = 0.5

_, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].set_title("Train"); axes[1].set_title("Validation");
plot_confusion(model, X_train, y_train, ax=axes[0])
plot_confusion(model, X_valid, y_valid, ax=axes[1])

We can see that the model makes a different number of false-positive and false-negative errors. Clearly, this proportion would change if the classification threshold were different.

A very common way to select the classification threshold is to find a threshold which optimally balances the FPR and the FNR. This can be done by plotting the model's ROC curve, which shows 1-FNR (the TPR) vs. FPR for multiple threshold values, and selecting the point closest to the ideal point $(0, 1)$.
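One possible way to compute such a threshold (a hedged sketch of ours using sklearn's roc_curve; the function you actually need to implement is select_roc_thresh in hw2/classifier.py):

import numpy as np
from sklearn.metrics import roc_curve

def closest_to_ideal_threshold(y_true, proba_positive):
    """Return the threshold whose (FPR, TPR) point is closest to (0, 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, proba_positive)
    # Euclidean distance of each ROC point from the ideal corner (0, 1)
    dist = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)
    return thresholds[np.argmin(dist)]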

TODO: Implement the select_roc_thresh function in the hw2.classifier module.

In [15]:
from hw2.classifier import select_roc_thresh


optimal_thresh = select_roc_thresh(model, *dl_valid.dataset.tensors, plot=True)

Let's see the effect of our threshold selection on the confusion matrix and decision boundary.

In [16]:
model.threshold = optimal_thresh

_, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].set_title("Train"); axes[1].set_title("Validation");
plot_confusion(model, X_train, y_train, ax=axes[0])
plot_confusion(model, X_valid, y_valid, ax=axes[1])
fig, ax = plot_decision_boundary_2d(model, *dl_valid.dataset.tensors)

Architecture Experiments¶

Now, equipped with the tools we've implemented so far, we'll experiment with various MLP architectures. We'll seek to study the effect of the model's depth (number of hidden layers) and width (number of neurons per hidden layer) on its decision boundaries and the resulting performance. After training, we will use the validation set for threshold selection, and seek to maximize the performance on the test set.

TODO: Implement the mlp_experiment function in hw2/experiments.py. You are free to configure any model and optimization hyperparameters however you like, except for the specified width and depth. Experiment with various options for these other hyperparameters and try to obtain the best results you can.

In [17]:
from itertools import product
from tqdm.auto import tqdm
from hw2.experiments import mlp_experiment

torch.manual_seed(seed)

depths = [1, 2, 4]
widths = [2, 8, 32]
exp_configs = product(enumerate(widths), enumerate(depths))
fig, axes = plt.subplots(len(widths), len(depths), figsize=(10*len(depths), 10*len(widths)), squeeze=False)
test_accs = []

for (i, width), (j, depth) in tqdm(list(exp_configs)):
    print(f"***experiment,  depth: {depth}, width: {width}***")
    model, thresh, valid_acc, test_acc = mlp_experiment(
        depth, width, dl_train, dl_valid, dl_test, n_epochs=10
    )
    print(f"results: thresh {thresh}, valid_acc {valid_acc}, test_acc {test_acc}")
    test_accs.append(test_acc)
    fig, ax = plot_decision_boundary_2d(model, *dl_test.dataset.tensors, ax=axes[i, j])
    ax.set_title(f"{depth=}, {width=}")
    ax.text(ax.get_xlim()[0]*.95, ax.get_ylim()[1]*.95, f"{thresh=:.2f}\n{valid_acc=:.1f}%\n{test_acc=:.1f}%", va="top") 
# Assert minimal performance requirements.
# You should be able to do better than these by at least 5%.
test.assertGreaterEqual(np.min(test_accs), 75.0)
test.assertGreaterEqual(np.quantile(test_accs, 0.75), 85.0)
***experiment,  depth: 1, width: 2***
training finished after 9 epochs
results: thresh 0.389316201210022, valid_acc 89.27634045744281, test_acc 90.05
***experiment,  depth: 2, width: 2***
early stopping!
training finished after 7 epochs
results: thresh 0.2892194092273712, valid_acc 90.9636295463067, test_acc 89.85
***experiment,  depth: 4, width: 2***
training finished after 9 epochs
results: thresh 0.269757479429245, valid_acc 91.93850768653918, test_acc 90.15
***experiment,  depth: 1, width: 8***
early stopping!
training finished after 7 epochs
results: thresh 0.2927970886230469, valid_acc 85.11436070491189, test_acc 86.45
***experiment,  depth: 2, width: 8***
early stopping!
training finished after 7 epochs
results: thresh 0.29290106892585754, valid_acc 89.38882639670041, test_acc 87.85
***experiment,  depth: 4, width: 8***
early stopping!
training finished after 4 epochs
results: thresh 0.27033981680870056, valid_acc 93.32583427071616, test_acc 90.85
***experiment,  depth: 1, width: 32***
early stopping!
training finished after 8 epochs
results: thresh 0.30386391282081604, valid_acc 90.21372328458942, test_acc 88.35
***experiment,  depth: 2, width: 32***
early stopping!
training finished after 4 epochs
results: thresh 0.31758973002433777, valid_acc 92.38845144356955, test_acc 89.85
***experiment,  depth: 4, width: 32***
early stopping!
training finished after 5 epochs
results: thresh 0.2883946895599365, valid_acc 92.72590926134234, test_acc 91.5

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [18]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Consider the first binary classifier you trained in this notebook and the loss/accuracy curves we plotted for it on the train and validation sets, as well as the decision boundary plot.

Based on those plots, explain qualitatively whether or not your model has:

  1. High Optimization error?
  2. High Generalization error?
  3. High Approximation error?

Explain your answers for each of the above. Since this is a qualitative question, assume "high" simply means "I would take measures in order to decrease it further".

In [19]:
display_answer(hw2.answers.part3_q1)

Your answer:

  1. given the loss graph, the optimization error is not high. this is because we see the training loss decrease smoothly until it reaches a plateau. that implies that the at the end of the optimization proccess the gradients are small and therefore the optimization error is small.
  2. looking at the test loss we conclude that the genralization error os a little bit high. compare to the train loss , the test loss is much more noisy, and don't have a good "decrease" shape. although there is no overfitting (the test loss is not raising), we can say that the generalization error is higher than the optimization error.
  3. looking at the decision boundry plot, we can say that the approximation error is not high. the model is able to create the non linear shape that separates the classes and therefore it is able to approximate the real boundary of the dataset.

Question 2¶

Consider the first binary classifier you trained in this notebook and the confusion matrices we plotted for it.

For the validation dataset, would you expect the FPR or the FNR to be higher, and why? Recall that you have full knowledge of the data generating process.

In [20]:
display_answer(hw2.answers.part3_q2)

Your answer:

For the model we trained at the beginning of the notebook, we expect the validation set to have a higher FNR than FPR. This is because the model's decision boundary over-estimates the area of class 0 (there are regions the model marked as class 0 that contain points from class 1), i.e. the probability that the model classifies a sample with label 1 as 0 is higher than in the opposite case (classifying a sample with label 0 as 1). Since those mistakes are false negatives, we expect the FNR to be higher than the FPR. In general, if we know exactly how the data was generated, we can estimate the FNR and FPR based on the model's decision boundary.

Question 3¶

You're training a binary classifier that screens a large cohort of patients for some disease, with the aim of detecting the disease early, before any symptoms appear. You train the model on easy-to-obtain features, so screening each individual patient is simple and low-cost. In case the model classifies a patient as sick, she must then be sent for further testing in order to confirm the illness. Assume that these further tests are expensive and involve high risk to the patient. Assume also that once diagnosed, a low-cost treatment exists.

You wish to screen as many people as possible at the lowest possible cost and loss of life. Would you still choose the same "optimal" point on the ROC curve as above? If not, how would you choose it? Answer these questions for two possible scenarios:

  1. A person with the disease will develop non-lethal symptoms that immediately confirm the diagnosis and can then be treated.
  2. A person with the disease shows no clear symptoms and may die with high probability if not diagnosed early enough, either by your model or by the expensive test.

Explain your answers.

In [21]:
display_answer(hw2.answers.part3_q3)

Your answer:

the "optimal" point on the ROC curve may not be the best choice in this situation, as the "naive" ROC curve do not take into account the costs and risks associated with false positive and false negative classifications.

In scenario 1, where the disease leads to non-lethal symptoms, the cost of a false positive classification (i.e. diagnosing a healthy patient as sick) is high , as it results in a unnecessary expensive and risky confirmation test. However, the cost of a false negative classification (i.e. failing to diagnose a sick patient) is low, as the symptoms are easy to detect and not dangerous. Therefore, in this scenario, it may be better to choose a classification threshold that maximizes sensitivity (i.e. minimizing false positive) even if it comes at the cost of increased false negative.

In scenario 2, where the disease may lead to high risk of death if not diagnosed early, the cost of a false negative classification is very high, as it may result in delayed treatment and death. Therefore, it may be better to choose a classification threshold that maximizes specificity (i.e. minimizing false positives) even if it comes at the cost of increased false negatives.

Question 4¶

Analyze your results from the Architecture Experiment.

  1. Explain the decision boundaries and model performance you obtained for the columns (fixed depth, width varies).
  2. Explain the decision boundaries and model performance you obtained for the rows (fixed width, depth varies).
  3. Compare and explain the results for the following pair of configurations, which have the same number of total parameters:
    • depth=1, width=32 and depth=4, width=8
  4. Explain the effect of threshold selection on the validation set: did it improve the results on the test set? why?
In [22]:
display_answer(hw2.answers.part3_q4)

Your answer:

  1. Analyzing the results by column (fixed depth), we see that for depth=1 the width that gave the best results is the lowest one (width=2); for depth=2 the lowest (width=2) and the highest (width=32) gave the same best results; and for depth=4 the best model was the one with the highest width (width=32). Only in the last column (depth=4) do we see consistency, where increasing the width leads to better results. This implies that for shallow networks, adding more parameters does not necessarily improve performance.
  2. For fixed width and varying depth we see much more consistency: the best model was always the one with the most layers, and except for one case (width=2, where depth=2 was slightly worse than depth=1) increasing the number of layers always increased the test accuracy. This implies that adding more layers is more effective at capturing complex features than adding more parameters (width) to a single layer.
  3. depth=4, width=8 achieved better results than depth=1, width=32 (90.85% vs. 88.35% test accuracy). This is further evidence for the point above: for a fixed number of total parameters, adding more layers is better than adding width to fewer layers. The idea behind it is that adding more layers increases the non-linearity of the model, while adding width only gives a better approximation per layer. We can think of each layer as a linear function followed by a non-linearity; increasing the number of layers increases the number of composed non-linear stages, which gives more expressiveness than fine-tuning the linear approximation of each layer.
  4. The optimal threshold selected on the validation set did not improve the results on the test set as much as on the validation set. The reason is that the optimal threshold can be sensitive to the specific dataset: a threshold chosen based on the validation set will not necessarily be optimal for the test set. The test-set samples may be distributed differently (indeed, here they come from a noisier distribution), so the true optimal threshold for the test set may be different.
In [ ]:
 
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 4: Convolutional Neural Networks¶

In this part we will explore convolutional networks. We'll implement a common block-based deep CNN pattern, with and without residual connections.

In [1]:
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Reminder: Convolutional layers and networks¶

Convolutional layers are the most essential building blocks of the state of the art deep learning image classification models and also play an important role in many other tasks. As we saw in the tutorial, when applied to images, convolutional layers operate on and produce volumes (3D tensors) of activations.

A convenient way to interpret convolutional layers for images is as a collection of 3D learnable filters, each of which operates on a small spatial region of the input volume. Each filter is convolved with the input volume ("slides over it"), and a dot product is computed at each location followed by a non-linearity which produces one activation. All these activations produce a 2D plane known as a feature map. Multiple feature maps (one for each filter) comprise the output volume.

A crucial property of convolutional layers is their translation equivariance, i.e. shifting the input results in an equivalently shifted output. This produces the ability to detect features regardless of their spatial location in the input.
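We can check this property numerically; with stride 1 and circular padding the equivariance holds exactly for cyclic shifts (a standalone check of ours, not part of the assignment code):

import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='circular', bias=False)
x = torch.randn(1, 3, 16, 16)

shift = dict(shifts=(2, 3), dims=(2, 3))   # cyclic shift in H and W
out_of_shifted = conv(torch.roll(x, **shift))
shifted_output = torch.roll(conv(x), **shift)
print(torch.allclose(out_of_shifted, shifted_output, atol=1e-6))  # True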

Convolutional network architectures usually follow a pattern of basic repeating blocks: one or more convolution layers, each followed by a non-linearity (generally ReLU), and then a pooling layer to reduce spatial dimensions. Usually, the number of convolutional filters increases the deeper they are in the network. These layers are meant to extract features from the input. Then, one or more fully-connected layers are used to combine the extracted features into the required number of output class scores.

Building convolutional networks with PyTorch¶

PyTorch provides all the basic building blocks needed for creating a convolutional architecture within the torch.nn package. Let's use them to create a basic convolutional network with the following architecture pattern:

[(CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC

Here $N$ is the total number of convolutional layers, $P$ specifies how many convolutions to perform before each pooling layer and $M$ specifies the number of hidden fully-connected layers before the final output layer.
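For example, with $N=4$, $P=2$, $M=1$ the pattern unrolls to something like the following hand-built sketch (the channel sizes and the CIFAR-like 3x32x32 input are arbitrary choices of ours; the CNN class you'll implement generates such structures programmatically):

import torch.nn as nn

# N=4 conv layers, P=2 convs per pool, M=1 hidden FC layer, 3x32x32 input
net = nn.Sequential(
    # [(CONV -> ACT)*2 -> POOL] * (4/2)
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),   # 32x32 -> 16x16
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),   # 16x16 -> 8x8
    # (FC -> ACT)*1 -> FC
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 100), nn.ReLU(),
    nn.Linear(100, 10),
)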

TODO: Complete the implementation of the CNN class in the hw2/cnn.py module. Use PyTorch's nn.Conv2d and nn.MaxPool2d for the convolution and pooling layers. It's recommended to implement the missing functionality in the order of the class' methods.

In [3]:
from hw2.cnn import CNN

test_params = [
    dict(
        in_size=(3,100,100), out_classes=10,
        channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
        conv_params=dict(kernel_size=3, stride=1, padding=1),
        activation_type='relu', activation_params=dict(),
        pooling_type='max', pooling_params=dict(kernel_size=2),
    ),
    dict(
        in_size=(3,100,100), out_classes=10,
        channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
        conv_params=dict(kernel_size=5, stride=2, padding=3),
        activation_type='lrelu', activation_params=dict(negative_slope=0.05),
        pooling_type='avg', pooling_params=dict(kernel_size=3),
    ),
    dict(
        in_size=(3,100,100), out_classes=3,
        channels=[16]*5, pool_every=3, hidden_dims=[100]*1,
        conv_params=dict(kernel_size=2, stride=2, padding=2),
        activation_type='lrelu', activation_params=dict(negative_slope=0.1),
        pooling_type='max', pooling_params=dict(kernel_size=2),
    ),
]

for i, params in enumerate(test_params):
    torch.manual_seed(seed)
    net = CNN(**params)
    print(f"\n=== test {i=} ===")
    print(net)

    torch.manual_seed(seed)
    test_out = net(torch.ones(1, 3, 100, 100))
    print(f'{test_out=}')

    expected_out = torch.load(f'tests/assets/expected_conv_out_{i:02d}.pt')
    print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
    test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU()
    (7): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU()
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (mlp): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=20000, out_features=100, bias=True)
      (1): ReLU()
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): ReLU()
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0745, -0.1058,  0.0928,  0.0476,  0.0057,  0.0051,  0.0938, -0.0582,
          0.0573,  0.0583]], grad_fn=<AddmmBackward0>)
max_diff=7.450580596923828e-09

=== test i=1 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (1): LeakyReLU(negative_slope=0.05)
    (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (3): LeakyReLU(negative_slope=0.05)
    (4): AvgPool2d(kernel_size=3, stride=3, padding=0)
    (5): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (6): LeakyReLU(negative_slope=0.05)
    (7): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (8): LeakyReLU(negative_slope=0.05)
    (9): AvgPool2d(kernel_size=3, stride=3, padding=0)
  )
  (mlp): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=32, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.05)
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): LeakyReLU(negative_slope=0.05)
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0724, -0.0030,  0.0637, -0.0073,  0.0932, -0.0662, -0.0656,  0.0076,
          0.0193,  0.0241]], grad_fn=<AddmmBackward0>)
max_diff=0.0

=== test i=2 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.1)
    (2): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (3): LeakyReLU(negative_slope=0.1)
    (4): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (5): LeakyReLU(negative_slope=0.1)
    (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (7): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (8): LeakyReLU(negative_slope=0.1)
    (9): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (10): LeakyReLU(negative_slope=0.1)
  )
  (mlp): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=400, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.1)
      (2): Linear(in_features=100, out_features=3, bias=True)
      (3): Identity()
    )
  )
)
test_out=tensor([[-0.0004, -0.0094,  0.0817]], grad_fn=<AddmmBackward0>)
max_diff=0.0

As before, we'll wrap our model with a Classifier that provides the necessary functionality for calculating probability scores and obtaining class label predictions. This time, we'll use a simple approach: select the class with the highest score.

TODO: Implement the ArgMaxClassifier in the hw2/classifier.py module.

In [4]:
from hw2.classifier import ArgMaxClassifier

model = ArgMaxClassifier(model=CNN(**test_params[0]))

test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test.assertEqual(model.classify(test_image).shape, (1,))
test.assertEqual(model.predict_proba(test_image).shape, (1, 10))
test.assertAlmostEqual(torch.sum(model.predict_proba(test_image)).item(), 1.0, delta=1e-3)

Let's now load CIFAR-10 to use as our dataset.

In [5]:
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())

print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')

x0,_ = ds_train[0]
in_size = x0.shape
num_classes = 10
print('input image size =', in_size)
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples
input image size = torch.Size([3, 32, 32])

Now as usual, as a sanity test let's make sure we can overfit a tiny dataset with our model. But first we need to adapt our Trainer for PyTorch models.

TODO:

  1. Complete the implementation of the ClassifierTrainer class in the hw2/training.py module, if you haven't done so already.
  2. Set the optimizer hyperparameters in part4_optim_hp() in hw2/answers.py.
In [6]:
from hw2.training import ClassifierTrainer
from hw2.answers import part4_optim_hp

torch.manual_seed(seed)

# Define a tiny part of the CIFAR-10 dataset to overfit it
batch_size = 2
max_batches = 25
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)

# Create model, loss and optimizer instances
model = ArgMaxClassifier(
    model=CNN(
        in_size, num_classes, channels=[32], pool_every=1, hidden_dims=[100],
        conv_params=dict(kernel_size=3, stride=1, padding=1),
        pooling_params=dict(kernel_size=2),
    )
)

hp_optim = part4_optim_hp()
loss_fn = hp_optim.pop('loss_fn')
optimizer = torch.optim.SGD(params=model.parameters(), **hp_optim)

# Use ClassifierTrainer to run only the training loop a few times.
trainer = ClassifierTrainer(model, loss_fn, optimizer, device)
best_acc = 0
for i in range(25):
    res = trainer.train_epoch(dl_train, max_batches=max_batches, verbose=(i%5==0))
    best_acc = res.accuracy if res.accuracy > best_acc else best_acc
    
# Test overfitting
test.assertGreaterEqual(best_acc, 90)

Residual Networks¶

A very common addition to the basic convolutional architecture described above is the shortcut connection. First proposed by He et al. (2016), this simple addition has been shown to be a crucial ingredient for achieving effective learning with very deep networks. Virtually all state-of-the-art image classification models from recent years use this technique.

The idea is to add a shortcut, or skip, around every two or more convolutional layers:

On the left we see an example of a regular residual block, which takes a 64-channel input and performs two 3x3 convolutions whose result is added to the original input.
On the right we see an example of a bottleneck residual block, which takes a 256-channel input, projects it to a 64-channel tensor with a 1x1 convolution, then performs an inner 3x3 convolution, followed by another 1x1 projection convolution back to the original number of channels, 256. The output is then added to the original input.

Overall, we can denote the structure of the bottleneck channels in the given example as 256->64->64->256, where the first and last arrows denote the 1x1 convolutions, and the middle arrow is the inner convolution. Note that a 1x1 convolution with the default parameters (in PyTorch) changes only the channel dimension of the tensor.

This adds an easy way for the network to learn identity mappings: set the weight values to be very small. The outcome is that the convolutional layers learn a residual mapping, i.e. some delta that is applied to the identity map, instead of actually learning a completely new mapping from scratch.

Let's start by implementing a general residual block, representing a structure similar to the above diagrams. Our residual block will be composed of:

  • A "main path" with some number of convolutional layers with ReLU between them. Optionally, we'll also apply dropout and batch normalization layers (in this order) between the convolutions, before the ReLU.
  • A "shortcut path" implementing an identity mapping around the main path. In case of a different number of input/output channels, the shortcut path should contain an additional 1x1 convolution to project the channel dimension.
  • The sum of the main and shortcut paths' outputs is passed through a ReLU and returned, as in the sketch below.
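In code, the essence of the block is just a sum before the final ReLU. A minimal sketch under these assumptions (two convolutions, no dropout/batchnorm, identity shortcut; the full class must also handle differing input/output channel counts):

import torch
import torch.nn as nn

class MiniResidualBlock(nn.Module):
    """Minimal sketch: two 3x3 convs on the main path, identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.main_path = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.shortcut_path = nn.Identity()  # 1x1 conv if channels differed

    def forward(self, x):
        # The main path learns a residual; the shortcut carries the identity
        return torch.relu(self.main_path(x) + self.shortcut_path(x))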

TODO: Complete the implementation of the ResidualBlock's __init__() method in the hw2/cnn.py module.

In [7]:
from hw2.cnn import ResidualBlock

torch.manual_seed(seed)

resblock = ResidualBlock(
    in_channels=3, channels=[6, 4]*2, kernel_sizes=[3, 5]*2,
    batchnorm=True, dropout=0.2
)
print(resblock)

torch.manual_seed(seed)
test_out = resblock(torch.ones(1, 3, 32, 32))
print(f'{test_out.shape=}')

expected_out = torch.load('tests/assets/expected_resblock_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBlock(
  (main_path): Sequential(
    (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): Dropout2d(p=0.2, inplace=False)
    (2): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (3): ReLU()
    (4): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (5): Dropout2d(p=0.2, inplace=False)
    (6): BatchNorm2d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): ReLU()
    (8): Conv2d(4, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): Dropout2d(p=0.2, inplace=False)
    (10): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU()
    (12): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  )
  (shortcut_path): Sequential(
    (0): Identity()
    (1): Conv2d(3, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
)
test_out.shape=torch.Size([1, 4, 32, 32])
max_diff=5.960464477539062e-07

Bottleneck Blocks¶

In the ResNet Block diagram shown above, the right block is called a bottleneck block. This type of block is mainly used deep in the network, where the feature space becomes increasingly high-dimensional (i.e. there are many channels).

Instead of applying a KxK conv layer on the original input channels, a bottleneck block first projects to a lower number of features (channels), applies the KxK conv on the result, and then projects back to the original feature space. Both projections are performed with 1x1 convolutions.
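Concretely, both projections are plain 1x1 convolutions; a small standalone sketch of the 256->64->64->256 example from above:

import torch
import torch.nn as nn

x = torch.randn(1, 256, 32, 32)

project_down = nn.Conv2d(256, 64, kernel_size=1)            # 256 -> 64
inner_conv   = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # the KxK conv
project_up   = nn.Conv2d(64, 256, kernel_size=1)            # 64 -> 256

out = project_up(inner_conv(project_down(x)))
print(out.shape)  # torch.Size([1, 256, 32, 32]) -- spatial dims unchanged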

TODO: Complete the implementation of the ResidualBottleneckBlock in the hw2/cnn.py module.

In [8]:
from hw2.cnn import ResidualBottleneckBlock

torch.manual_seed(seed)
resblock_bn = ResidualBottleneckBlock(
    in_out_channels=256, inner_channels=[64, 32, 64], inner_kernel_sizes=[3, 5, 3],
    batchnorm=False, dropout=0.1, activation_type="lrelu"
)
print(resblock_bn)

# Test a forward pass
torch.manual_seed(seed)
test_in  = torch.ones(1, 256, 32, 32)
test_out = resblock_bn(test_in)
print(f'{test_out.shape=}')
assert test_out.shape == test_in.shape 

expected_out = torch.load('tests/assets/expected_resblock_bn_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBottleneckBlock(
  (main_path): Sequential(
    (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
    (1): Dropout2d(p=0.1, inplace=False)
    (2): LeakyReLU(negative_slope=0.01)
    (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (4): Dropout2d(p=0.1, inplace=False)
    (5): LeakyReLU(negative_slope=0.01)
    (6): Conv2d(64, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (7): Dropout2d(p=0.1, inplace=False)
    (8): LeakyReLU(negative_slope=0.01)
    (9): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (10): Dropout2d(p=0.1, inplace=False)
    (11): LeakyReLU(negative_slope=0.01)
    (12): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
  )
  (shortcut_path): Sequential(
    (0): Identity()
  )
)
test_out.shape=torch.Size([1, 256, 32, 32])
max_diff=1.1920928955078125e-07

Now, based on the ResidualBlock, we'll implement our own variation of a residual network (ResNet), with the following architecture:

[-> (CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC
 \------- SKIP ------/

Note that $N$, $P$ and $M$ are as before; however, now $P$ also controls the number of convolutional layers grouped under each skip connection.

TODO: Complete the implementation of the ResNet class in the hw2/cnn.py module. You must use your ResidualBlocks or ResidualBottleneckBlocks to group together every $P$ convolutional layers.
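For intuition, one natural way to split the channels list into groups of at most $P$ convolutions, where each group becomes one residual block (a sketch of one possible approach; the names below are illustrative, not the required implementation):

# Sketch: chunk the per-layer channel list into groups of at most P channels;
# each group becomes one ResidualBlock (or ResidualBottleneckBlock), with a
# pooling layer after every full group.
channels = [32, 64] * 3   # N = 6 conv layers in total
P = 4                     # pool (and close a skip connection) every P convs

groups = [channels[i:i + P] for i in range(0, len(channels), P)]
print(groups)  # [[32, 64, 32, 64], [32, 64]] -> one full block plus a shorter last block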

In [10]:
from hw2.cnn import ResNet

test_params = [
    dict(
        in_size=(3,100,100), out_classes=10, channels=[32, 64]*3,
        pool_every=4, hidden_dims=[100]*2,
        activation_type='lrelu', activation_params=dict(negative_slope=0.01),
        pooling_type='avg', pooling_params=dict(kernel_size=2),
        batchnorm=True, dropout=0.1,
        bottleneck=False
    ),
    dict(
        # create 64->16->64 bottlenecks
        in_size=(3,100,100), out_classes=5, channels=[64, 16, 64]*4,
        pool_every=3, hidden_dims=[64]*1,
        activation_type='tanh',
        pooling_type='max', pooling_params=dict(kernel_size=2),
        batchnorm=True, dropout=0.1,
        bottleneck=True
    )
]

for i, params in enumerate(test_params):
    torch.manual_seed(seed)
    net = ResNet(**params)
    print(f"\n=== test {i=} ===")
    print(net)

    torch.manual_seed(seed)
    test_out = net(torch.ones(1, 3, 100, 100))
    print(f'{test_out=}')
    
    expected_out = torch.load(f'tests/assets/expected_resnet_out_{i:02d}.pt')
    print(f'{expected_out=}')
    print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
    test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
ResNet(
  (feature_extractor): Sequential(
    (0): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): LeakyReLU(negative_slope=0.01)
        (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): LeakyReLU(negative_slope=0.01)
        (8): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): Dropout2d(p=0.1, inplace=False)
        (10): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (11): LeakyReLU(negative_slope=0.01)
        (12): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
        (1): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (1): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (2): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): LeakyReLU(negative_slope=0.01)
        (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
  )
  (mlp): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=160000, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.01)
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): LeakyReLU(negative_slope=0.01)
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0422,  0.0332,  0.1870, -0.0532, -0.0742,  0.1143, -0.0617, -0.0467,
          0.0852,  0.0221]], grad_fn=<AddmmBackward0>)
expected_out=tensor([[ 0.0422,  0.0332,  0.1870, -0.0532, -0.0742,  0.1143, -0.0617, -0.0467,
          0.0852,  0.0221]], requires_grad=True)
max_diff=8.195638656616211e-08

=== test i=1 ===
ResNet(
  (feature_extractor): Sequential(
    (0): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
        (1): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (mlp): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=2304, out_features=64, bias=True)
      (1): Tanh()
      (2): Linear(in_features=64, out_features=5, bias=True)
      (3): Identity()
    )
  )
)
test_out=tensor([[ 0.0237, -0.1945, -0.0085, -0.4024, -0.2667]],
       grad_fn=<AddmmBackward0>)
expected_out=tensor([[ 0.0237, -0.1945, -0.0085, -0.4024, -0.2667]], requires_grad=True)
max_diff=2.384185791015625e-07

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [11]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Consider the bottleneck block from the right side of the ResNet diagram above. Compare it to a regular block that performs two 3x3 convolutions directly on the 256-channel input (i.e. as shown on the left side of the diagram, but with a different number of channels). Explain the differences between the regular block and the bottleneck block in terms of:

  1. Number of parameters. Calculate the exact numbers for these two examples.
  2. Number of floating point operations required to compute an output (qualitative assessment).
  3. Ability to combine the input: (1) spatially (within feature maps); (2) across feature maps.
In [12]:
display_answer(hw2.answers.part4_q1)

Your answer:

  1. In general, the 1x1 convolutions reduce the number of parameters in the bottleneck block. In our case, a direct calculation gives the following. For the regular block we have two layers with 3x3 kernels and 256 input and output channels, which gives (including bias): $(3\cdot 3\cdot 256+1)\cdot 256\cdot 2 = 1{,}180{,}160$ parameters. For the bottleneck block we have three layers, with $(1\cdot 1\cdot 256+1)\cdot 64 + (3\cdot 3\cdot 64+1)\cdot 64 + (1\cdot 1\cdot 64+1)\cdot 256 = 70{,}016$ parameters. So although the bottleneck block has more layers, it has far fewer parameters in total.
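As a sanity check, these counts can be reproduced directly from PyTorch layers (a small verification sketch, not part of the required answer):

import torch.nn as nn

def n_params(*layers):
    # Total number of learnable parameters across the given layers
    return sum(p.numel() for layer in layers for p in layer.parameters())

# Regular block: two 3x3 convs, 256 -> 256 -> 256 (with bias)
regular = n_params(nn.Conv2d(256, 256, 3, padding=1), nn.Conv2d(256, 256, 3, padding=1))
# Bottleneck: 1x1 down to 64, 3x3 at 64 channels, 1x1 back up to 256
bottleneck = n_params(nn.Conv2d(256, 64, 1), nn.Conv2d(64, 64, 3, padding=1), nn.Conv2d(64, 256, 1))
print(regular, bottleneck)  # 1180160 70016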

  2. The bottleneck block requires far fewer floating point operations than the regular block. For stride-1 convolutions that preserve the spatial size, the FLOP count scales roughly as (number of parameters) x H x W, so the roughly 17x reduction in parameters translates into a similar reduction in FLOPs: the 1x1 convolutions shrink the channel dimension first, so the expensive 3x3 convolution operates on far fewer channels.

  3. Spatially (within feature maps): since both blocks apply convolutions with the same 3x3 kernel size, both can combine inputs spatially within feature maps. The 1x1 convolutions provide no benefit here, since they have no spatial extent.

Across feature maps: the bottleneck block is better at combining inputs across feature maps because it varies the number of feature maps: the 1x1 convolutions reduce the number of channels at the beginning and restore it at the end. This compression forces the block to learn a compact combination of the input feature maps, enabling it to capture more complex relations across channels.

Part 5: Convolutional Architecture Experiments¶

In this part we will explore convolutional networks and the effects of their architecture on accuracy. We'll use our deep CNN implementation and perform various experiments on it while varying the architecture. Then we'll implement our own custom architecture to see whether we can achieve high classification accuracy on a large subset of CIFAR-10.

Training will be performed on GPU.

In [1]:
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Experimenting with model architectures¶

We will now perform a series of experiments that train various model configurations on a part of the CIFAR-10 dataset.

To perform the experiments, you'll need to use a machine with a GPU since training time might be too long otherwise.

Note about running on GPUs¶

Here's an example of running a forward pass on the GPU (assuming you're running this notebook on a GPU-enabled machine).

In [3]:
from hw2.cnn import ResNet

net = ResNet(
    in_size=(3,100,100), out_classes=10, channels=[32, 64]*3,
    pool_every=4, hidden_dims=[100]*2,
    pooling_type='avg', pooling_params=dict(kernel_size=2),
)
net = net.to(device)

test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test_image = test_image.to(device)

test_out = net(test_image)

Notice how we called .to(device) on both the model and the input tensor. Here device is a torch.device object that we created above. If an NVIDIA GPU is available on the machine you're running this on, the device will be 'cuda'. When you run .to(device) on a model, it recursively goes over all the model's parameter tensors and copies their memory to the GPU. Similarly, calling .to(device) on the input image copies it as well.

In order to train on a GPU, you need to make sure to move all your tensors to it. You'll get errors if you try to mix CPU and GPU tensors in a computation.
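To make this concrete, here's a minimal sketch of a training-style loop that moves each batch to the device before the forward pass (the random dataset below is illustrative only; in the real experiments the batches come from CIFAR-10):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative random data matching the model's expected input size
data = TensorDataset(torch.randn(64, 3, 100, 100), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

model = net.to(device)  # `net` and `device` are defined in the cells above
for x, y in loader:
    x, y = x.to(device), y.to(device)  # move each batch to the same device as the model
    y_pred = model(x)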

In [4]:
print(f'This notebook is running with device={device}')
print(f'The model parameter tensors are also on device={next(net.parameters()).device}')
print(f'The test image is also on device={test_image.device}')
print(f'The output is therefore also on device={test_out.device}')
This notebook is running with device=cpu
The model parameter tensors are also on device=cpu
The test image is also on device=cpu
The output is therefore also on device=cpu

Notes on using course servers¶

First, please read the course servers guide carefully.

To run the experiments on the course servers, you can use the py-sbatch.sh script directly to perform a single experiment run in batch mode (since it runs python once), or use the srun command to do a single run in interactive mode. For example, running a single run of experiment 1 interactively (after conda activate of course):

srun -c 2 --gres=gpu:1 --pty python -m hw2.experiments run-exp -n test -K 32 64 -L 2 -P 2 -H 100

To perform multiple runs in batch mode with sbatch (e.g. for running all the configurations of an experiments), you can create your own script based on py-sbatch.sh and invoke whatever commands you need within it.

Don't request more than 2 CPU cores and 1 GPU device for your runs. The code won't be able to utilize more than that anyway, so you'll see no performance gain if you do. It will only cause delays for other students using the servers.

General notes for running experiments¶

  • You can run the experiments on a different machine (e.g. the course servers) and copy the results (files) to the results folder on your local machine. This notebook will only display the results, not run the actual experiment code (except for a demo run).
  • It's important to give each experiment run a name as specified by the notebook instructions later on. Each run has a run_name parameter that will also be the base name of the results file which this notebook will expect to load.
  • You will implement the code to run the experiments in the hw2/experiments.py module. This module has a CLI parser so that you can invoke it as a script and pass in all the configuration parameters for a single experiment run.
  • You should use python -m hw2.experiments run-exp to run an experiment, and not python hw2/experiments.py run-exp, regardless of how/where you run it.

Experiment 1: Network depth and number of filters¶

In [5]:
from hw2.experiments import load_experiment, cnn_experiment
from cs236781.plot import plot_fit


# # Test experiment1 implementation on a few data samples and with a small model
# cnn_experiment(
#     'test_run', seed=seed, bs_train=50, batches=10, epochs=10, early_stopping=5,
#     filters_per_layer=[32,64], layers_per_block=1, optimizer='Adam',
#     model_type='resnet'
# )

# # There should now be a file 'test_run.json' in your `results/` folder.
# # We can use it to load the results of the experiment.
# cfg, fit_res = load_experiment('results/test_run_L1_K32-64.json')
# _, _ = plot_fit(fit_res, train_test_overlay=True)

# # And `cfg` contains the exact parameters to reproduce it
# print('experiment config: ', cfg)

In this part we will test some different architecture configurations based on our CNN and ResNet. Specifically, we want to try different depths and number of features to see the effects these parameters have on the model's performance.

To do this, we'll define two extra hyperparameters for our model, K (filters_per_layer) and L (layers_per_block).

  • K is a list, containing the number of filters we want to have in our conv layers.
  • L is the number of consecutive layers with the same number of filters to use.

For example, if K=[32, 64] and L=2 it means we want two conv layers with 32 filters followed by two conv layers with 64 filters. If we also use pool_every=3, the feature-extraction part of our model will be:

Conv(X,32)->ReLU->Conv(32,32)->ReLU->Conv(32,64)->ReLU->MaxPool->Conv(64,64)->ReLU
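Inside cnn_experiment(), one simple way to expand K and L into the model's per-layer channels list (a sketch of one possible implementation, not the required one):

# Sketch: repeat each filter count in K for L consecutive layers.
K, L = [32, 64], 2
channels = [k for k in K for _ in range(L)]
print(channels)  # [32, 32, 64, 64] -> passed as the model's channels parameter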

We'll try various values of the K and L parameters in combination and see how each architecture trains. All other hyperparameters are up to you, including the choice of the optimization algorithm, the learning rate, regularization and architecture hyperparams such as pool_every and hidden_dims. Note that you should select the pool_every parameter wisely per experiment so that you don't end up with zero-width feature maps.

You can try some short manual runs to determine some good values for the hyperparameters or implement cross-validation to do it. However, the dataset size you test on should be large. If you limit the number of batches, make sure to use at least 30000 training images and 5000 validation images.

The important thing is that you state what you used, how you decided on it, and explain your results based on that.

First we need to write some code to run the experiment.

TODO:

  1. Implement the cnn_experiment() function in the hw2/experiments.py module.
  2. If you haven't done so already, it would be an excellent idea to implement the early stopping feature of the Trainer class (see the sketch below).
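A minimal, self-contained sketch of such an early-stopping criterion (the function name and signature are illustrative, not the Trainer's actual API):

def should_stop(val_accuracies, patience):
    """Return True if the last `patience` epochs showed no improvement
    over the best validation accuracy seen before them."""
    if len(val_accuracies) <= patience:
        return False
    best_before = max(val_accuracies[:-patience])
    return all(acc <= best_before for acc in val_accuracies[-patience:])

print(should_stop([0.50, 0.60, 0.61, 0.60, 0.59, 0.58], patience=3))  # True: no epoch beat 0.61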

The following block tests that your implementation works. It's also meant to show you that each experiment run creates a result file containing the parameters to reproduce and the FitResult object for plotting.

We'll use the following function to load multiple experiment results and plot them together.

In [6]:
def plot_exp_results(filename_pattern, results_dir='results'):
    fig = None
    result_files = glob.glob(os.path.join(results_dir, filename_pattern))
    result_files.sort()
    if len(result_files) == 0:
        print(f'No results found for pattern {filename_pattern}.', file=sys.stderr)
        return
    for filepath in result_files:
        m = re.match(r'exp\d_(\d_)?(.*)\.json', os.path.basename(filepath))
        cfg, fit_res = load_experiment(filepath)
        fig, axes = plot_fit(fit_res, fig, legend=m[2], log_loss=True)
    del cfg['filters_per_layer']
    del cfg['layers_per_block']
    print('common config: ', cfg)
In [7]:
# from hw2.experiments import run_optuna_experiment
# run_optuna_experiment("optuna_resnet_Adam", filters_per_layer=[64, 128, 256], layers_per_block=2, subset=10000, n_trials=30)

Experiment 1.1: Varying the network depth (L)¶

First, we'll test the effect of the network depth on training.

Configurations:

  • K=32 fixed, with L=2,4,8,16 varying per run
  • K=64 fixed, with L=2,4,8,16 varying per run

So 8 different runs in total.

Naming runs: Each run should be named exp1_1_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_1_L2_K32.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [8]:
plot_exp_results('exp1_1_L*_K32*.json')
common config:  {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 2, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}
In [9]:
plot_exp_results('exp1_1_L*_K64*.json')
common config:  {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 2, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}

Experiment 1.2: Varying the number of filters per layer (K)¶

Now we'll test the effect of the number of convolutional filters in each layer.

Configurations:

  • L=2 fixed, with K=[32],[64],[128] varying per run.
  • L=4 fixed, with K=[32],[64],[128] varying per run.
  • L=8 fixed, with K=[32],[64],[128] varying per run.

So 9 different runs in total. To clarify, in each run K takes the value of a list with a single element.

Naming runs: Each run should be named exp1_2_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_2_L2_K32.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [10]:
plot_exp_results('exp1_2_L2*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 1, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}
In [11]:
plot_exp_results('exp1_2_L4*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 1, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}
In [12]:
plot_exp_results('exp1_2_L8*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 2, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}

Experiment 1.3: Varying both the number of filters (K) and network depth (L)¶

Now we'll test the combined effect of varying both the number of filters per layer and the network depth.

Configurations:

  • K=[64, 128] fixed with L=2,3,4 varying per run.

So 3 different runs in total. To clarify, in each run K is a list with two elements.

Naming runs: Each run should be named exp1_3_L{}_K{}-{} where the braces are placeholders for the values. For example, the first run should be named exp1_3_L2_K64-128.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [13]:
plot_exp_results('exp1_3*.json')
common config:  {'run_name': 'exp1_3', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.003007970289449718, 'reg': 0.008128033924872312, 'pool_every': 2, 'hidden_dims': [512, 512], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1633292046324038, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9969682475071221, 0.9835809796844466]}, 'subset': False}

Experiment 1.4: Adding depth with Residual Networks¶

Now we'll test the effect of skip connections on the training and performance.

Configurations:

  • K=[32] fixed with L=8,16,32 varying per run.
  • K=[64, 128, 256] fixed with L=2,4,8 varying per run.

So 6 different runs in total.

Naming runs: Each run should be named exp1_4_L{}_K{}-{}-{} where the braces are placeholders for the values.

TODO: Run the experiment on the above configuration with the ResNet model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [14]:
plot_exp_results('exp1_4_L*_K32.json')
common config:  {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.004447877019908671, 'reg': 0.0037125311935656893, 'pool_every': 2, 'hidden_dims': [768, 768], 'model_type': 'resnet', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1335296994519493, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9074547592651931, 0.8399910531349177]}, 'subset': False}
In [15]:
plot_exp_results('exp1_4_L*_K64*.json')
common config:  {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.004447877019908671, 'reg': 0.0037125311935656893, 'pool_every': 8, 'hidden_dims': [768, 768], 'model_type': 'resnet', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'dropout': 0.1335296994519493, 'bottleneck': False, 'loss_fn': 'cross entropy', 'optimizer': 'Adam', 'hp_optim': {'betas': [0.9074547592651931, 0.8399910531349177]}, 'subset': False}

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [16]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Analyze your results from experiment 1.1. In particular,

  1. Explain the effect of depth on the accuracy. What depth produces the best results and why do you think that's the case?
  2. Were there values of L for which the network wasn't trainable? What causes this? Suggest two things that may be done to at least partially resolve it.
In [17]:
display_answer(hw2.answers.part5_q1)

Your answer:

In the first experiment we tested the effect of varying the depth with a fixed number of channels in a CNN model. The graphs show that for a small number of layers (2, 4), increasing the depth improves accuracy: more layers (with a fixed number of channels) enlarge the receptive field and allow the network to learn more complex features within the image.

When the number of layers keeps growing (above 4 in our case), the loss and accuracy remain constant, meaning the gradients are effectively zero. This is the vanishing-gradients issue, which makes the deeper models non-trainable. Vanishing gradients are a problem of deep models: propagating the gradients through many layers causes them to become very small and eventually vanish, since the gradient is multiplied by each layer's weights along the way, and if those weights are smaller than 1 the product shrinks exponentially. One way to mitigate this is batch normalization, which normalizes each layer's input to zero mean and unit variance and lets the gradients flow through the network without vanishing. Another is skip connections, which let the gradients flow directly to the lower layers without passing through all the intermediate ones.
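To illustrate this claim, a small standalone sketch (not part of the assignment code) that measures the gradient magnitude reaching the first conv layer of a deep plain CNN, with and without BatchNorm:

import torch
import torch.nn as nn

def make_net(depth, batchnorm):
    # Plain stack of 3x3 convs, optionally with BatchNorm after each conv
    layers = []
    for _ in range(depth):
        layers.append(nn.Conv2d(32, 32, kernel_size=3, padding=1))
        if batchnorm:
            layers.append(nn.BatchNorm2d(32))
        layers.append(nn.ReLU())
    return nn.Sequential(*layers)

torch.manual_seed(0)
x = torch.randn(4, 32, 16, 16)
for bn in (False, True):
    net = make_net(depth=16, batchnorm=bn)
    net(x).sum().backward()
    # Magnitude of the gradient that reaches the first (input-side) conv layer
    print(f'batchnorm={bn}: first-layer grad norm = {net[0].weight.grad.norm():.3e}')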

Question 2¶

Analyze your results from experiment 1.2. In particular, compare to the results of experiment 1.1.

In [18]:
display_answer(hw2.answers.part5_q2)

Your answer:

In the second experiment we tested the effect of varying the width with a fixed depth in a CNN model. The graphs show that for a fixed depth, increasing the width yields only a minor accuracy gain (compared to the change we saw in exp1_1 when varying the depth). For L=2 the difference between widths is almost negligible, and for L=4 it is slightly larger. The effect of increasing the width is minor because the depth is fixed, so the receptive field is fixed; the model cannot learn more complex features just by adding channels. Moving from L=2 to L=4 improves performance for all values of $K$ (i.e., a model with L=4 and K=32 is better than one with L=2 and K=128), which implies that depth matters more than the number of channels. In addition, for L=8 we again observe vanishing gradients, similar to what we saw in exp1_1. The performance of the best models (L=4) was similar in both experiments.

Question 3¶

Analyze your results from experiment 1.3.

In [19]:
display_answer(hw2.answers.part5_q3)

Your answer:

In the third experiment we tested the effect of varying both the depth and the width in a CNN model. The graphs show that models with more than 4 convolutional layers in total are non-trainable, consistent with what we saw in exp1_1 and exp1_2. That leaves only one trainable model, $L=2$ with $K=[64,128]$, which performed similarly to the models from exp1_1 and exp1_2 with L=4 (which have the same total number of layers).

Question 4¶

Analyze your results from experiment 1.4. Compare to experiment 1.1 and 1.3.

In [20]:
display_answer(hw2.answers.part5_q4)

Your answer:

In experiment 1.4 we tested the effect of skip connections by using a ResNet. In experiments 1.1 and 1.3 we saw that models with more than 4 convolutional layers in total suffered from vanishing gradients and became non-trainable. With skip connections (in 1.4) this phenomenon no longer occurs: all the models were trainable and we could train much deeper networks. We therefore conclude that the residual block, together with batchnorm, enables a more uniform flow of the gradients and prevents them from vanishing or exploding. Regarding performance: since the hyperparameter space is very large (many parameters spanning a wide range of values) and the models can be sensitive to the choice of hyperparameters, fully utilizing the models requires an efficient tuning method. We used the Optuna package, which provides an optimization framework for efficiently tuning a large number of hyperparameters. We ran multiple optimization experiments on selected architectures with a small number of epochs, for both the ResNet and the CNN, and based on the results we chose the parameters for the actual experiments. On this specific dataset and with the given architectures, the ResNet with the smallest number of layers converged first, which suggests that the dataset is relatively simple and can be learned by a shallow network. We also see that the CNN outperforms the ResNet (although the difference is small and might change with more careful hyperparameter tuning). However, on other computer-vision tasks, the extra depth that ResNets allow has usually been shown to outperform shallow networks.


Part 6: YOLO - Object Detection¶

In this part we will use an object detection architecture called YOLO (You Only Look Once) to detect objects in images. We'll use pretrained model weights (YOLOv5) found here: https://github.com/ultralytics/yolov5

In [1]:
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the YOLO model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model.to(device)
# Images
img1 = 'imgs/DolphinsInTheSky.jpg'  
img2 = 'imgs/cat-shiba-inu-2.jpg' 
Using cache found in /home/ilay.kamai/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 🚀 2023-5-15 Python-3.8.12 torch-1.10.1 CPU

requirements: /home/ilay.kamai/.cache/torch/hub/requirements.txt not found, check failed.
Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape... 

Inference with YOLO¶

You are provided with 2 images (img1 and img2). TODO:

  1. Detect objects using the YOLOv5 model for these 2 images.

  2. Print the inference output with bounding boxes.

  3. Calculate the number of pixels within a bounding box and the number in the background.

    Hint: Given that you stored the model output in a variable named 'results', you may find 'results.pandas().xyxy' helpful.

  4. Look at the inference results and answer the question below.

In [2]:
#Insert the inference code here.
import cv2
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
def plot_boxes(img, df):
    """
    Plot bounding boxes on image
    """
    colors = plt.cm.hsv(np.linspace(0, 1, len(df)+1).tolist())
    classes = df['class']
    plt.imshow(img)
    plt.axis('off')
    # Binary mask marking every pixel covered by at least one bounding box,
    # so the counts below measure the union of the boxes (not their sum)
    ref_img = torch.zeros((img.shape[0], img.shape[1]))
    for i in range(len(df)):
        row = df.iloc[i]
        x1, y1, x2, y2 = row['xmin'], row['ymin'], row['xmax'], row['ymax']
        conf = row['confidence']
        name = row['name']
        cls = row['class']
        ref_img[int(y1):int(y2), int(x1):int(x2)] = 1
        color = colors[np.where(classes==cls)[0][0]]
        label = f"{name}: {conf:.2f}"
        y_offset = int(img.shape[0]*0.07)
        plt.gca().add_patch(plt.Rectangle((x1, y1), x2-x1, y2-y1, fill=False, linewidth=2, edgecolor=color))
        plt.gca().text(x1, y1-y_offset, label,color=color, fontsize=10, ha='left', va='top')
    tot_pixels = img.shape[0]*img.shape[1]
    box_pixels = len(torch.where(ref_img)[0])
    background_pixels = len(torch.where(ref_img==0)[0])
    print("total number of pixels in all bounding boxes: ", box_pixels, " {:.2f} of image pixels".format(box_pixels/tot_pixels))
    print("total number of pixels in background: ", background_pixels, " {:.2f} of image pixels".format(background_pixels/tot_pixels))
    
    plt.show()
for im in [img1, img2]:
    print(im)
    with torch.no_grad():
        results = model(im)
        df = results.pandas().xyxy[0]
        # print(df)
        im_arr = cv2.imread(im)[:,:,::-1]
        plot_boxes(im_arr, df)
imgs/DolphinsInTheSky.jpg
total number of pixels in all bounding boxes:  13173  0.26 of image pixels
total number of pixels in background:  37152  0.74 of image pixels
imgs/cat-shiba-inu-2.jpg
total number of pixels in all bounding boxes:  406433  0.72 of image pixels
total number of pixels in background:  156067  0.28 of image pixels

Question 1¶

Analyze the inference results of the 2 images.

  1. How well did the model detect the objects in the pictures?
  2. What can possibly be the reason for the model failures? suggest methods to resolve that issue.
In [3]:
from cs236781.answers import display_answer
import hw2.answers
In [4]:
display_answer(hw2.answers.part6_q1)

Your answer:

In general, the model did not perform well on these pictures. In the first image there are three dolphins. The model located one of the bounding boxes correctly, but the other two are inaccurate. The labels are also wrong: the model interpreted the scene as people on surfboards, and therefore labeled the dolphins as "person" or "surfboard". This suggests a bias in the dataset the model was trained on: it contained many images of people on surfboards, so when the model sees an object above the water it interprets it as a person on a surfboard. In the second image there are three dogs and a cat close to each other. Here the model placed two bounding boxes over a cat and a dog together and labeled both as a cat. Possible reasons for the poor performance: a small number of classes - even when the model located a bounding box correctly, it mislabeled the object, because it was trained on a dataset with a limited number of classes (the base model was trained on the COCO dataset, which has 80 classes). Another reason is occlusion - in the second image the cat is occluded by the dog, and the model failed to locate it.


To resolve these issues we can:
1. Train the model on a dataset with more classes (e.g. ImageNet) and fine-tune it on our dataset.
2. Train the model on a dataset with more variability per class - many instances of the same class in different poses.
3. Train the model with a wider range of bounding-box sizes, to let it better separate objects that are close together.
4. Increase the number of bounding boxes per grid cell, to let the model locate more objects within the same cell.

Creative Detection Failures¶

Object detection pitfalls include, for example: occlusion - when objects are partially occluded and thus missing important features; model bias - when a model learns some bias about an object, it may recognize it as something else in a different setup; and many others, such as deformation, illumination conditions, cluttered or textured backgrounds, and blurring due to moving objects.

TODO: Take pictures that demonstrate 3 of the above object detection pitfalls, run inference, and analyze the results.

In [5]:
import os
imgs = [os.path.join("imgs/YOLO", f) for f in os.listdir("imgs/YOLO") if os.path.splitext(f)[1] in [".jpg", ".jpeg", ".png"]]
for im in imgs:
    print(im)
    with torch.no_grad():
        results = model(im)
        df = results.pandas().xyxy[0]
        im_arr = cv2.imread(im)[:,:,::-1]
        plot_boxes(im_arr, df)
imgs/YOLO/ducks.png
total number of pixels in all bounding boxes:  22588  0.09 of image pixels
total number of pixels in background:  216728  0.91 of image pixels
imgs/YOLO/cow_and_cat.jpeg
total number of pixels in all bounding boxes:  372931  0.54 of image pixels
total number of pixels in background:  316009  0.46 of image pixels
imgs/YOLO/cat-shiba-inu-2.jpg
total number of pixels in all bounding boxes:  406433  0.72 of image pixels
total number of pixels in background:  156067  0.28 of image pixels
imgs/YOLO/shadow.jpeg
total number of pixels in all bounding boxes:  3345615  0.19 of image pixels
total number of pixels in background:  14570289  0.81 of image pixels
imgs/YOLO/bear2.jpg
total number of pixels in all bounding boxes:  523076  0.48 of image pixels
total number of pixels in background:  556924  0.52 of image pixels
imgs/YOLO/DolphinsInTheSky.jpg
total number of pixels in all bounding boxes:  13173  0.26 of image pixels
total number of pixels in background:  37152  0.74 of image pixels

Question 3¶

Analyze the results of the inference.

  1. How well did the model detect the objects in the pictures? Explain.
In [6]:
display_answer(hw2.answers.part6_q3)

Your answer:

The model shows a different drawback in each of the images. The first image shows many little ducks on the road together with people and cars. The model catches the people but fails to catch any of the ducks; this might be because they are very small and occluded objects (they mask each other and look like one big object). It also does not recognize any of the cars in the background, possibly because of hard lighting conditions. The second image shows a cow licking a cat. This time the model places the bounding boxes accurately and classifies the cat, but it misclassifies the cow as a dog. This might be due to model bias: such a scene is unusual, and most of the training images of an animal licking a cat probably featured a dog or another cat, so when the model sees a cat being licked by another animal it mistakenly interprets the licker as a dog (or a cat), whereas here it is a cow. The third image is very interesting: we see a bear at a campsite playing with a stove. The model produces two almost identical bounding boxes around the bear, one labeled as a bear and the other (with lower probability) as a cat. Two factors may cause the confusion:

1. The bear's head is down; this is a form of deformation that changes the bear's shape and makes it harder to classify.
2. This is another unusual scene (a bear playing with a stove), and the cat classification may relate to model bias (cats are more likely to play with human objects).

The last image is of a man and a dog in the dark. The model does not recognize the man, and recognizes the dog as a horse. This is because the dark illumination obscures most of the features that help the model recognize the objects.

Bonus¶

Try improving the model performance over poorly recognized images by changing them. Describe the manipulations you did to the pictures.

In [7]:
import torchvision.transforms as T
from PIL import Image
torch.manual_seed(0)
aug = T.AutoAugment()
imgs_to_aug = ['imgs/YOLO/ducks.png', 'imgs/YOLO/shadow.jpeg']
aug_imgs = [aug(Image.open(im)) for im in imgs_to_aug]
for name, im in zip(imgs_to_aug, aug_imgs):
    print(name)
    split = os.path.splitext(name)
    new_path = f"{split[0]}_aug{split[1]}"
    print(new_path)
    # Note: saving under the original filename (rather than new_path) overwrites
    # the file, so the TTA cell below runs on the augmented version of each image;
    # new_path is computed only for the printout.
    im.save(name)
    
imgs/YOLO/ducks.png
imgs/YOLO/ducks_aug.png
imgs/YOLO/shadow.jpeg
imgs/YOLO/shadow_aug.jpeg
In [8]:
imgs = [os.path.join("imgs/YOLO", f) for f in os.listdir("imgs/YOLO") if os.path.splitext(f)[1] in [".jpg", ".jpeg", ".png"]]

class AugmentYOLO(torch.nn.Module):
    """Thin wrapper around the hub YOLOv5 model for test-time augmentation (TTA)."""
    def __init__(self, yolo):
        super().__init__()
        self.yolo = yolo
    def forward(self, x, **kw):
        # Delegate to the wrapped model; calling it with augment=True (as below)
        # triggers YOLOv5's built-in multi-scale/flip TTA.
        return self.yolo(x, **kw)
    def forward_augment(self, x):
        # Manual TTA pass adapted from YOLOv5's internals. scale_img, forward_once,
        # _descale_pred and stride are assumed to be provided by the underlying
        # YOLOv5 model/utilities (names may differ across YOLOv5 versions).
        img_size = x.shape[-2:]  # height, width
        s = [1, 0.83, 0.67, 0.4]      # scales
        f = [None, 3, None, 2, None]  # flips (2-ud, 3-lr); zip truncates to len(s)
        y = []  # outputs per augmentation
        for si, fi in zip(s, f):
            xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.yolo.stride.max()))
            yi = self.yolo.forward_once(xi)[0]  # forward pass on the scaled/flipped input
            yi = self.yolo._descale_pred(yi, fi, si, img_size)
            y.append(yi)
        return torch.cat(y, 1), None  # augmented inference, train-output placeholder
a_yolo = AugmentYOLO(model)

for im in imgs:
    print(im)
    with torch.no_grad():
        results = a_yolo(im, augment=True)
        df = results.pandas().xyxy[0]
        print(set(df['name']))
        im_arr = cv2.imread(im)[:,:,::-1]
        plot_boxes(im_arr, df)
    
imgs/YOLO/ducks.png
{'bus', 'motorcycle', 'car', 'person', 'truck'}
total number of pixels in all bounding boxes:  24994  0.10 of image pixels
total number of pixels in background:  214322  0.90 of image pixels
imgs/YOLO/cow_and_cat.jpeg
{'cat', 'dog'}
total number of pixels in all bounding boxes:  599211  0.87 of image pixels
total number of pixels in background:  89729  0.13 of image pixels
imgs/YOLO/cat-shiba-inu-2.jpg
{'cat', 'dog'}
total number of pixels in all bounding boxes:  406304  0.72 of image pixels
total number of pixels in background:  156196  0.28 of image pixels
imgs/YOLO/shadow.jpeg
{'horse'}
total number of pixels in all bounding boxes:  7059074  0.39 of image pixels
total number of pixels in background:  10856830  0.61 of image pixels
imgs/YOLO/bear2.jpg
{'suitcase', 'cell phone', 'dining table', 'cat', 'bear', 'chair'}
total number of pixels in all bounding boxes:  744540  0.69 of image pixels
total number of pixels in background:  335460  0.31 of image pixels
imgs/YOLO/DolphinsInTheSky.jpg
{'person', 'bird', 'kite'}
total number of pixels in all bounding boxes:  12683  0.25 of image pixels
total number of pixels in background:  37642  0.75 of image pixels
In [9]:
display_answer(hw2.answers.part6_bonus)

Your answer:

We applied two types of image augmentation. First, we performed general, automatic augmentation using torchvision's AutoAugment, which applies an augmentation policy derived from the ImageNet dataset. Since not all images had brightness issues, and AutoAugment changes the brightness, we applied it only to the images we suspected would benefit from brightness adjustment. Next, we applied geometric augmentation to all images using the test-time augmentation (TTA) functionality inside YOLOv5. To do that we created a custom YOLO wrapper class exposing the forward_augment method that takes care of the TTA process. The results improved considerably with these augmentations: the cars that were previously missed are now identified well, and the person in the dark now gets a bounding box (though with the wrong class). Another improvement is the cat and the dogs: the model now locates the bounding boxes accurately and classifies two of the animals correctly. To summarize: using simple augmentations like AutoAugment and TTA, the predictions can improve significantly with almost zero effort.
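For reference, since the wrapper's forward() simply delegates its keyword arguments, the same TTA path can be triggered directly on the hub model, as the inference loop above effectively does:

with torch.no_grad():
    results = model('imgs/YOLO/ducks.png', augment=True)  # YOLOv5's built-in test-time augmentation
print(set(results.pandas().xyxy[0]['name']))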